
Network throughput

Network throughput refers to the actual rate at which data is successfully transferred from one point to another across a network within a given time period, typically measured in bits per second (bps), megabits per second (Mbps), or gigabits per second (Gbps). Unlike bandwidth, which represents the theoretical maximum capacity of a network link, throughput accounts for real-world limitations and reflects the effective performance under operational conditions. In benchmarking contexts, it is defined as the maximum rate at which none of the offered frames are dropped by a network device, providing a standardized metric for evaluating interconnection equipment like routers and switches.

Key factors influencing network throughput include latency, the delay in data transmission; packet loss, where data packets fail to reach their destination; and congestion, which occurs when traffic exceeds available capacity, leading to reduced efficiency. Protocol overhead, such as headers added by TCP/IP or Ethernet framing, also reduces effective throughput by consuming capacity without contributing to payload data. For TCP-based connections, throughput is particularly affected by the congestion window size, round-trip time (RTT), and the congestion avoidance phase, in which equilibrium is reached after the initial slow-start phase to sustain maximum data rates. These elements collectively determine how closely a network's actual performance approaches its theoretical limits.

Measuring network throughput involves tools and methodologies that capture successful data delivery rates, often using protocols like SNMP for monitoring or packet analyzers such as Wireshark for detailed traffic inspection. Standardized tests, such as those outlined in RFC 2544, assess throughput by sending frames at varying rates and identifying the highest rate without loss, commonly applied in device testing scenarios. In operational networks, throughput is critical for ensuring quality of service (QoS), optimizing resource utilization, and supporting applications like video streaming or VoIP, where consistent performance directly impacts user experience.

Fundamentals

Definition and Scope

Network throughput refers to the rate at which data is successfully transmitted from one point to another over a communication channel, excluding any retransmissions or errors, and is typically measured in bits per second (bps). This metric captures the effective data delivery rate in the presence of real-world constraints such as protocol overhead, congestion, and hardware limitations, distinguishing it from raw transmission capacity. In essence, it quantifies the usable performance of a link or system for end-to-end data transfer.

The scope of network throughput primarily encompasses digital communication networks, including local area networks (LANs), wide area networks (WANs), and the broader Internet infrastructure, where data is exchanged via packetized formats across shared or dedicated channels. While analogous concepts exist in non-network domains, such as throughput in computing hardware, the term in this context is confined to networked environments involving multiple nodes and potential sources of contention. A basic prerequisite for understanding throughput is familiarity with data packets—discrete units of information routed independently—and transmission channels, which serve as the physical or logical pathways for data propagation.

The concept of throughput originated in manufacturing processes, where it measured units produced per unit of time, and later evolved to describe efficient data handling in communication systems during the mid-20th century. A pivotal advancement came with Claude Shannon's 1948 paper, "A Mathematical Theory of Communication," which established the channel capacity theorem as the theoretical upper limit on reliable data transmission rates over noisy channels, laying the foundational principles for modern throughput analysis. This theorem marked a key evolution, shifting focus from ideal conditions to practical limits imposed by noise and bandwidth, thereby influencing throughput metrics in subsequent network designs.

Units and Measurement

Network throughput is conventionally measured in bits per second (bps), reflecting the rate of successful data transmission over a link. This unit aligns with the binary nature of digital data transmission, where information is encoded in bits. Common prefixes scale the measurement for higher capacities: kilobits per second (Kbps = 10^3 bps), megabits per second (Mbps = 10^6 bps), gigabits per second (Gbps = 10^9 bps), and terabits per second (Tbps = 10^12 bps). In some contexts, particularly storage or application-level reporting, throughput is expressed in bytes per second (Bps), where 1 byte equals 8 bits, so 1 Bps = 8 bps; this conversion accounts for the grouping of bits into octets for data handling.

Empirical measurement of throughput employs specialized tools and protocols to quantify data transfer rates. For controlled testing in laboratory settings, iperf—a widely used open-source tool—generates synthetic traffic (via TCP, UDP, or SCTP) between endpoints to assess maximum achievable throughput, reporting results in bits per second or bytes per second over configurable intervals. In operational environments, the Simple Network Management Protocol (SNMP) enables ongoing monitoring through Management Information Base (MIB) objects like ifInOctets and ifOutOctets, which track cumulative bytes received and transmitted; throughput is derived by calculating the delta in these counters over a polling interval and multiplying by 8 to convert to bps. Laboratory measurements typically involve steady, unidirectional traffic for repeatable baselines, whereas real-world monitoring captures dynamic patterns, including variable loads and mixed protocols, often revealing lower effective rates due to environmental factors.

Standardization of throughput units and measurement practices is established by bodies like the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronics Engineers (IEEE). IETF RFCs, such as RFC 1242 (Benchmarking Terminology) and RFC 2544 (Benchmarking Methodology), define throughput as the maximum rate of packet transfer without loss, expressed in bps or packets per second, with methodologies emphasizing frame size distributions and trial repetitions for accuracy. IEEE standards, including 802.3 for Ethernet, similarly specify link speeds and capacities in bps, ensuring interoperability across local area networks. However, measurements can introduce errors in bursty traffic scenarios, where short-term spikes lead to high variability; IETF guidance in RFC 7640 highlights how such patterns stress traffic management functions and necessitate robust averaging to mitigate inaccuracies.

To address temporal variability, throughput is frequently reported as an average over defined time intervals, such as 1 second, smoothing fluctuations from intermittent or bursty flows. For instance, in iperf tests, throughput is computed as the total transferred data divided by the test duration (default 10 seconds), with optional periodic reports every second to track changes; similarly, SNMP-derived rates use polling periods (e.g., 1-5 minutes) to compute averages, balancing accuracy with reduced overhead. This approach provides a stable metric for comparison, though shorter intervals may amplify noise from bursts, while longer ones obscure transient issues.
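
As a minimal sketch of the counter-delta method described above (the byte counts and polling interval are hypothetical values, not taken from any cited measurement), the following Python snippet converts two ifInOctets-style samples into an average bit-per-second rate, with a simple guard for 32-bit counter wraparound.

```python
def throughput_bps(octets_start, octets_end, interval_s, counter_bits=32):
    """Average throughput in bits per second from two SNMP octet-counter samples.

    Handles a single counter wrap (e.g., ifInOctets is a 32-bit counter).
    """
    max_count = 2 ** counter_bits
    delta = (octets_end - octets_start) % max_count  # bytes transferred in the interval
    return delta * 8 / interval_s                    # convert bytes to bits, divide by time

# Hypothetical samples: 1,250,000,000 bytes counted over a 300 s polling interval.
rate = throughput_bps(octets_start=10_000_000, octets_end=1_260_000_000, interval_s=300)
print(f"{rate / 1e6:.1f} Mbps")  # ~33.3 Mbps averaged over the interval
```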

Theoretical Maximums

Maximum Theoretical Throughput

The maximum theoretical throughput of a communication channel is defined by the Shannon-Hartley theorem, which establishes the upper limit on the rate at which information can be reliably transmitted over a noisy channel. This theorem, formulated by Claude Shannon in 1948, quantifies the channel capacity C in bits per second (bps) as C = B \log_2(1 + \text{SNR}), where B is the channel bandwidth in hertz (Hz) and \text{SNR} is the signal-to-noise ratio, a dimensionless measure of signal power relative to noise power.

The derivation of this formula assumes an additive white Gaussian noise (AWGN) channel model, where noise is uncorrelated, has a flat power spectral density, and follows a Gaussian distribution; it further posits that transmission occurs over infinite time with optimal coding schemes that approach the capacity limit arbitrarily closely but never exceed it. Under these conditions, the theorem proves that reliable communication is possible at rates up to C, but any higher rate leads to an unavoidable error probability.

To illustrate, consider a channel with B = 1 MHz (10^6 Hz) and \text{SNR} = 30 dB, equivalent to a power ratio of 10^{30/10} = 1000. The capacity is calculated as C = 10^6 \log_2(1 + 1000) = 10^6 \log_2(1001). Since \log_2(1001) \approx 9.97 (computed via \log_2(x) = \ln(x)/\ln(2), with \ln(1001) \approx 6.908 and \ln(2) \approx 0.693), C \approx 9.97 Mbps. This example demonstrates how capacity scales logarithmically with SNR but linearly with bandwidth, providing a fundamental benchmark for channel design. While the Shannon-Hartley theorem sets an idealized upper bound, it relies on idealized channel assumptions and near-perfect coding efficiency, which real-world channels rarely achieve due to non-ideal noise distributions and practical coding limitations.
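
The worked example above can be reproduced numerically; the short Python sketch below (parameters chosen to match the 1 MHz, 30 dB case) converts the SNR from decibels to a linear power ratio and evaluates the Shannon-Hartley capacity.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + SNR) for an AWGN channel."""
    snr_linear = 10 ** (snr_db / 10)          # 30 dB corresponds to a power ratio of 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

c = shannon_capacity_bps(bandwidth_hz=1e6, snr_db=30)
print(f"Capacity: {c / 1e6:.2f} Mbps")  # ~9.97 Mbps
```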

Asymptotic Throughput

In the high signal-to-noise ratio (SNR) regime, where SNR ≫ 1, the throughput of an additive white Gaussian noise (AWGN) channel asymptotically approaches C \approx B \log_2 (\text{SNR}), with B denoting the bandwidth. This behavior arises from the Shannon capacity formula C = B \log_2 (1 + \text{SNR}), which simplifies under the high-SNR approximation by neglecting the 1 relative to SNR. In wideband regimes, a pre-log factor emerges to characterize the number of independent signaling dimensions, influencing the scaling of throughput with SNR; for single-input single-output systems, this factor is 1, but it generalizes to higher values in advanced configurations.

In the low-SNR regime, corresponding to power-limited conditions where SNR ≪ 1, the throughput scales linearly with transmit power P but becomes independent of B for fixed total power. Specifically, C \approx \frac{P}{N_0} \log_2 e, where N_0 is the noise power spectral density, highlighting that additional bandwidth does not yield proportional gains when power is constrained. This linear dependence on power underscores the emphasis on energy efficiency in such scenarios, with each doubling of power approximately doubling the achievable rate.

Multi-carrier techniques like orthogonal frequency-division multiplexing (OFDM) extend these asymptotic models to frequency-selective channels in modern wireless systems, such as 4G LTE and 5G, by dividing the channel into parallel flat-fading subchannels that collectively approach the AWGN capacity bounds. With adaptive power allocation via waterfilling across subcarriers, OFDM systems can closely attain the theoretical asymptotic throughput, minimizing the gap to the Shannon limit in both high- and low-SNR conditions. In multiple-input multiple-output (MIMO) systems, the concept of spatial degrees of freedom further refines the high-SNR asymptote, where throughput grows as C \approx \min(N_t, N_r) B \log_2 (\text{SNR}), with N_t and N_r representing the number of transmit and receive antennas, respectively. This multiplexing gain, or pre-log factor of \min(N_t, N_r), captures the additional parallel channels enabled by spatial separation, fundamentally enhancing asymptotic performance over single-antenna setups.
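
To make the two regimes concrete, the illustrative Python comparison below (arbitrary example parameters, not drawn from any cited system) evaluates the exact AWGN capacity alongside the high-SNR and low-SNR approximations discussed above.

```python
import math

def capacity_exact(b_hz, snr):
    """Exact AWGN capacity C = B * log2(1 + SNR)."""
    return b_hz * math.log2(1 + snr)

def capacity_high_snr(b_hz, snr):
    """High-SNR approximation: neglects the +1 when SNR >> 1."""
    return b_hz * math.log2(snr)

def capacity_low_snr(p_over_n0):
    """Low-SNR, power-limited limit: C ~ (P/N0) * log2(e), independent of bandwidth."""
    return p_over_n0 * math.log2(math.e)

B = 10e6  # 10 MHz example channel

# High-SNR regime (SNR = 1000): approximation within ~0.1% of the exact value.
print(capacity_exact(B, 1000) / 1e6, capacity_high_snr(B, 1000) / 1e6)

# Low-SNR regime (SNR = 0.01, so P/N0 = SNR * B): capacity tracks power, not bandwidth.
snr = 0.01
print(capacity_exact(B, snr) / 1e6, capacity_low_snr(snr * B) / 1e6)
```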

Practical Performance

Peak Measured Throughput

Peak measured throughput refers to the highest instantaneous data rate achieved on a link or network under controlled, ideal conditions, such as short-burst transmissions with minimal latency and no competing traffic. This metric captures the upper limit of performance during brief, optimized operations, distinct from sustained rates over longer periods. Such peaks are typically measured using specialized benchmarking tools like Netperf, which conducts unidirectional throughput tests across TCP and UDP protocols to quantify maximum achievable rates without external interference. In laboratory settings, these measurements often involve single-stream transfers over dedicated links to isolate hardware and protocol capabilities.

For Ethernet networks, peak measured throughput on 10 GbE interfaces has reached approximately 9.24 Gbps using UDP over IPv4 in controlled tests with optimized cabling like CAT8. In Wi-Fi 6 (IEEE 802.11ax) environments, lab measurements under ideal conditions with 160 MHz channels and multiple spatial streams have approached the advertised maximum of 9.6 Gbps, though real-world peaks are often lower due to environmental variables. Recent fiber-optic trials in 2024 demonstrated peaks of 400 Gbps over distances exceeding 18,000 km on subsea cables, leveraging coherent optical transmission for high-capacity wavelengths. Achieving these peaks requires factors like buffer optimization to handle bursty traffic efficiently and environments with near-zero packet loss to prevent retransmissions. As of 2025, 6G prototypes have recorded peaks over 100 Gbps using integrated photonic chips across multiple frequency bands, surpassing the theoretical maxima of prior generations in sub-THz tests. However, such peak rates are rarely sustained, as they depend on fleeting ideal conditions and quickly degrade with any protocol overhead or contention.

Maximum Sustained Throughput

Maximum sustained throughput refers to the steady-state data transfer rate that a network can reliably maintain over extended periods under operational loads, reflecting long-term performance after initial transients like TCP slow start have subsided. This metric captures the regime in which the network operates consistently without significant degradation, often limited by protocol behaviors and resource constraints. In TCP streams, sustained throughput typically achieves 90-95% of the link speed in optimized setups, such as when receive window sizes exceed the bandwidth-delay product to avoid bottlenecks. For instance, the RFC 6349 framework for TCP throughput testing emphasizes measuring this steady state to ensure buffers fully utilize the available bandwidth.

In enterprise LANs, Gigabit Ethernet links commonly sustain around 940 Mbps using TCP, representing about 94% of the 1 Gbps nominal rate after accounting for headers and inter-frame gaps, though this can vary with configuration details like frame sizes. Forward error correction (FEC) plays a key role in upholding these rates on error-prone paths by embedding parity data to reconstruct lost packets, reducing retransmission overhead and preserving steady flow—particularly vital in high-speed or wireless extensions of enterprise networks. Testing for maximum sustained throughput involves long-duration benchmarks, such as sessions lasting 10 minutes or longer, to verify stability beyond short bursts and capture effects like buffer saturation. Real-world limitations, including routing dynamics, further influence these rates; BGP convergence, which can take seconds to minutes during failures, temporarily disrupts path stability and caps sustained performance until alternate routes propagate. As of 2025, 5G mmWave deployments in urban areas demonstrate sustained throughputs averaging several hundred Mbps, with field trials achieving over 2 Gbps downlink under favorable conditions, though dense environments often yield lower averages due to interference and mobility.
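
As a back-of-the-envelope illustration of the receive-window condition mentioned above (the link speed and RTT are example figures, not from any cited test), the sketch below computes the bandwidth-delay product of a path and the maximum sustained TCP throughput a given window size permits.

```python
def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to keep the pipe full."""
    return link_bps * rtt_s / 8

def window_limited_throughput_bps(window_bytes, rtt_s):
    """Upper bound on TCP throughput when limited only by the receive window."""
    return window_bytes * 8 / rtt_s

link = 1e9   # 1 Gbps link
rtt = 0.02   # 20 ms round-trip time
print(f"BDP: {bdp_bytes(link, rtt) / 1e6:.1f} MB")  # 2.5 MB of buffering needed
print(f"64 KB window caps at {window_limited_throughput_bps(64 * 1024, rtt) / 1e6:.1f} Mbps")
```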

Efficiency Metrics

Channel Utilization

Channel utilization, also known as link utilization, is defined as the ratio of the actual data throughput achieved over a communication channel to the channel's maximum capacity, expressed as a percentage. This metric quantifies how effectively the available bandwidth is employed for productive data transmission, with values below 100% indicating periods of idle time or inefficiency in channel usage. The standard formula for channel utilization U in a single-channel model is U = \left( \frac{\text{Throughput}}{\text{Capacity}} \right) \times 100\%, where throughput represents the effective data rate and capacity is the theoretical maximum bit rate of the channel.

A primary cause of underutilization is idle time resulting from propagation delays, particularly in protocols like stop-and-wait, where the sender must await an acknowledgment before transmitting the next packet, leaving the channel unused during the round-trip time (RTT). In such scenarios, utilization drops significantly when the propagation delay exceeds the transmission time, as quantified by the ratio a = \frac{\text{propagation time}}{\text{transmission time}}, leading to U = \frac{1}{1 + 2a} for error-free stop-and-wait operation. For example, in satellite links with geosynchronous orbits, the one-way delay is approximately 250 ms, resulting in an RTT of around 500-560 ms, which causes slow-start mechanisms to underutilize the channel, often achieving less than 10% utilization in basic configurations due to prolonged idle periods. Similarly, in early Ethernet networks using CSMA/CD (Carrier Sense Multiple Access with Collision Detection), channel utilization is reduced by collisions and backoff delays; the efficiency approximates U = \frac{1}{1 + 6.4a} under light load, where a is again the ratio of propagation time to transmission time, leading to maximum utilizations around 80-90% in typical 10 Mbps setups but dropping lower with increasing contention.

To mitigate these issues, pipelining techniques, such as those employed in sliding window protocols (e.g., Go-Back-N or Selective Repeat), allow multiple packets to be in transit simultaneously, overlapping transmission with propagation and acknowledgment delays to approach 100% utilization when the window size W satisfies W \geq 1 + 2a. In software-defined networking (SDN), dynamic channel allocation further enhances utilization by centrally optimizing user associations and channel assignments based on observed conditions, achieving up to 30% higher throughput in dense environments compared to static methods. This approach is particularly effective in single-channel models by reducing interference and idle slots through software-controlled reconfiguration.
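
The stop-and-wait formula above can be applied directly; the illustrative Python snippet below (the satellite-hop and frame-size figures are examples) shows how a large propagation-to-transmission ratio a collapses utilization, and how a sliding window of W ≥ 1 + 2a restores it.

```python
def stop_and_wait_utilization(a):
    """U = 1 / (1 + 2a), with a = propagation time / transmission time."""
    return 1 / (1 + 2 * a)

def sliding_window_utilization(w, a):
    """Pipelined (Go-Back-N style) utilization: full once the window covers the round trip."""
    return min(1.0, w / (1 + 2 * a))

# Example: 1500-byte frames at 10 Mbps over a 250 ms one-way geosynchronous satellite hop.
transmission_time = 1500 * 8 / 10e6   # 1.2 ms per frame
propagation_time = 0.25               # 250 ms one way
a = propagation_time / transmission_time

print(f"a = {a:.0f}")
print(f"Stop-and-wait utilization: {stop_and_wait_utilization(a):.4%}")   # well under 1%
print(f"Window of 500 frames:      {sliding_window_utilization(500, a):.1%}")
```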

Throughput Efficiency

Throughput efficiency quantifies the effectiveness of a network in delivering data relative to its theoretical capacity, accounting for various losses. It is formally defined as the ratio of the achieved throughput to the theoretical maximum throughput, expressed as a percentage: \eta = \left( \frac{R_{\text{achieved}}}{R_{\text{theoretical}}} \right) \times 100\%, where R_{\text{achieved}} represents the actual data rate observed under operational conditions, and R_{\text{theoretical}} is the ideal limit, such as the Shannon capacity for a given channel. This metric incorporates both coding gains, which enhance reliability and potentially increase effective throughput by reducing retransmissions, and coding losses from overhead that diminish the net rate.

A key metric for assessing throughput efficiency is spectral efficiency, measured in bits per second per hertz (bps/Hz), which evaluates how densely information is packed into the available spectrum. For instance, in ideal uncoded conditions with minimal error correction, quadrature amplitude modulation (QAM) schemes achieve log2(M) bps/Hz, where M is the number of constellation symbols: 64-QAM yields 6 bps/Hz, while 256-QAM reaches 8 bps/Hz. In practical systems like DOCSIS 3.0, however, overheads reduce these to approximately 4.15 bps/Hz for 64-QAM (upstream) and 6.33 bps/Hz for 256-QAM (downstream). As of 2025, 5G and emerging standards achieve higher spectral efficiencies, with 1024-QAM enabling up to 10 bps/Hz in mmWave bands under low error conditions, further enhanced by AI-driven resource allocation. These values highlight the trade-off between modulation order and robustness, as higher-order QAM improves efficiency but requires better signal-to-noise ratios to maintain low error rates.

Factors influencing throughput efficiency include overhead from forward error correction (FEC), which adds redundant bits to combat errors but reduces the effective payload fraction, thereby lowering net throughput by 10-20% depending on the code rate. In contemporary applications, particularly in 2025 networks, AI-driven optimizations mitigate such losses by dynamically adjusting FEC parameters and resource allocation for real-time workloads, such as high-frequency financial data streaming, achieving up to 15-20% gains in efficiency over traditional methods. Additionally, the concept of the Pareto frontier describes the optimal trade-offs in multi-objective scenarios, such as balancing throughput efficiency against delay in routing protocols, where no single configuration improves one objective without degrading the other, guiding designs in delay-tolerant networks.
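
The relationship between modulation order and ideal spectral efficiency noted above is simple to tabulate; the snippet below (ideal, uncoded assumption) computes log2(M) for several QAM orders and the efficiency ratio once a practically achieved figure is compared against that bound.

```python
import math

def ideal_spectral_efficiency(m):
    """Ideal uncoded spectral efficiency of M-QAM in bps/Hz."""
    return math.log2(m)

def efficiency_ratio_percent(achieved_bps_hz, m):
    """Throughput efficiency relative to the ideal log2(M) bound, as a percentage."""
    return 100 * achieved_bps_hz / ideal_spectral_efficiency(m)

for m in (16, 64, 256, 1024):
    print(f"{m}-QAM ideal: {ideal_spectral_efficiency(m):.0f} bps/Hz")

# Example from the text: ~6.33 bps/Hz achieved with 256-QAM downstream.
print(f"256-QAM at 6.33 bps/Hz -> {efficiency_ratio_percent(6.33, 256):.0f}% of the ideal bound")
```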

Influencing Factors

Protocol Overhead and Limitations

Network protocols introduce various forms of overhead that diminish the effective throughput by consuming bandwidth and introducing delays, primarily through header information, control mechanisms, and error recovery processes. Header overhead arises from the inclusion of metadata in each packet, such as addressing, sequencing, and checksums; for instance, the TCP/IP stack typically adds 40 bytes per packet over IPv4 (a 20-byte IP header plus a 20-byte TCP header) or at least 60 bytes over IPv6, reducing the usable payload relative to the total packet size. In unreliable channels prone to packet loss, protocols like TCP trigger retransmissions to ensure reliability, which further erode throughput by duplicating data transmission and increasing contention on the medium.

Specific protocols exemplify these limitations. TCP's congestion control, such as the Reno variant, produces a sawtooth pattern in its congestion window—doubling during slow start, increasing additively during congestion avoidance, then halving upon loss detection—which results in average link utilization of approximately 75% of the available bandwidth under steady-state conditions, as the window oscillates between the threshold and half that value. In contrast, UDP minimizes overhead with an 8-byte header, offering lower per-packet costs and no built-in reliability or congestion control, which can yield higher raw throughput in loss-tolerant applications like streaming, though it risks unrecovered packet loss.

The impact of header overhead on effective throughput can be quantified using the formula \text{Effective Throughput} = \left( \frac{\text{Payload Size}}{\text{Payload Size} + \text{Header Size}} \right) \times \text{Raw Bit Rate}. This expression highlights how fixed header sizes penalize smaller payloads more severely; for example, with a 1500-byte Ethernet MTU and 40-byte TCP/IP headers, the efficiency is about 97% for the resulting 1460-byte payload. The Maximum Transmission Unit (MTU) plays a critical role here, as larger MTUs (e.g., 9000 bytes in jumbo frames) reduce the relative overhead per packet, allowing fewer packets to carry the same data volume and thus improving overall throughput by minimizing header repetition. Modern protocols address some TCP limitations; the QUIC protocol, developed in the 2010s, standardized by the IETF in 2021, and built over UDP, integrates transport and security handshakes to reduce connection establishment overhead, achieving 10-20% lower latency in connection-setup scenarios compared to TCP/TLS, which indirectly boosts sustained throughput by avoiding head-of-line blocking.
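
Applying the effective-throughput formula above, the short sketch below (example raw rate, MTU, and header sizes) shows how a fixed 40-byte TCP/IPv4 header penalizes small payloads far more than full-size or jumbo frames.

```python
def effective_throughput_bps(raw_bps, payload_bytes, header_bytes=40):
    """Effective throughput = raw rate * payload / (payload + headers)."""
    return raw_bps * payload_bytes / (payload_bytes + header_bytes)

raw = 1e9  # 1 Gbps raw bit rate
for payload in (64, 512, 1460, 8960):   # last two: standard-MTU and jumbo-frame payloads
    eff = effective_throughput_bps(raw, payload)
    print(f"{payload:>5}-byte payload: {eff / 1e6:7.1f} Mbps ({eff / raw:.1%} efficiency)")
```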

Hardware and Physical Constraints

Network throughput is fundamentally constrained by the physical properties of transmission media and hardware components, which impose limits on signal quality and processing capacity. In analog terms, signal attenuation occurs as electromagnetic waves propagate through media like copper cables, where conductor and dielectric losses cause the signal to decrease exponentially with distance, reducing the signal-to-noise ratio (SNR) and thereby limiting achievable data rates. This is exacerbated by environmental factors such as temperature variations, but its primary impact is a degradation in throughput beyond certain cable lengths, as weaker signals require more error correction or retransmissions. Additionally, thermal noise, arising from the random motion of electrons in conductors at room temperature (approximately 290 K), establishes a fundamental noise floor of -174 dBm/Hz, below which signals become indistinguishable from noise, capping the maximum information transfer rate per Shannon's capacity formula.

Integrated circuit (IC) hardware in network devices further delineates throughput boundaries through processing limitations. Clock speeds determine the rate at which data can be serialized and deserialized; higher clock frequencies enable faster packet handling but are bounded by signal propagation delays within the silicon, typically limiting core routers to frequencies around 1-2 GHz without advanced cooling. Buffer sizes in routers and switches also play a critical role, as insufficient buffering leads to packet drops during bursts, reducing effective throughput; the optimal size is often tuned to the bandwidth-delay product of the link, but oversized buffers introduce latency. Application-specific integrated circuits (ASICs) outperform field-programmable gate arrays (FPGAs) in router throughput due to their customized pipelines, achieving up to 10-20% higher packet processing rates at equivalent power levels, though FPGAs offer flexibility for evolving standards at the cost of lower peak performance.

Transmission media exemplify these constraints in practice. For copper cabling, Category 6 (Cat6) twisted pair supports a maximum of 10 Gbps over 55 meters, beyond which attenuation and crosstalk exceed tolerable limits, necessitating lower speeds like 1 Gbps up to 100 meters per TIA-568 standards. In contrast, fiber optic cables mitigate distance-related attenuation through low-loss silica cores (around 0.2 dB/km at 1550 nm), allowing throughput to be extended over very long spans via erbium-doped fiber amplifiers (EDFAs) that boost signals every 80-100 km without converting to the electrical domain, enabling terabit-per-second rates over transoceanic distances. Extensions of Moore's law into 2025 have facilitated 100 Gbps Ethernet chips with transistor densities exceeding 100 billion per die, but power dissipation—reaching 100-200 W per chip—imposes scaling limits, as heat generation outpaces cooling advancements and threatens reliability.
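
The -174 dBm/Hz figure above follows directly from the kTB relation; a minimal Python check (room temperature of 290 K assumed) computes the thermal noise power for a given bandwidth.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def thermal_noise_dbm(bandwidth_hz, temperature_k=290):
    """Thermal noise power N = kTB, expressed in dBm."""
    noise_watts = K_BOLTZMANN * temperature_k * bandwidth_hz
    return 10 * math.log10(noise_watts * 1000)  # convert W to mW before taking dB

print(f"{thermal_noise_dbm(1):.1f} dBm/Hz")    # ~ -174 dBm per hertz of bandwidth
print(f"{thermal_noise_dbm(20e6):.1f} dBm")    # ~ -101 dBm noise floor for a 20 MHz channel
```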

Multi-User and Environmental Effects

In multi-user environments, network throughput is significantly degraded by contention mechanisms designed to manage shared medium access. In Wi-Fi networks employing Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), overlapping transmissions from multiple devices lead to unequal channel access opportunities, resulting in performance degradation and long-term unfairness among nodes. This contention allows nodes with fewer interferers to dominate the medium, reducing overall throughput by increasing collision probabilities and backoff delays; simulations show improvements of up to 80% when contention is mitigated through adaptive backoff schemes, albeit with minor throughput trade-offs. In cellular networks, multi-user scheduling via orthogonal frequency-division multiple access (OFDMA) allocates resource blocks to mitigate contention, dynamically assigning frequency-time chunks based on channel quality to balance throughput and fairness. Algorithms such as proportionally fair scheduling select active users per time-slot and optimize power and subcarrier allocation, achieving near-optimal utilities that enhance system throughput—for instance, suboptimal methods yield utilities around 54,000 in simulations with 40 users, compared to baseline integer allocations.

Environmental factors further exacerbate throughput degradation through signal propagation challenges. Rayleigh fading, a model for non-line-of-sight multipath environments, introduces time-varying losses that correlate packet errors, reducing throughput as Doppler spread increases (e.g., from 10 Hz to 30 Hz), with steady-state models showing drops tied to higher state transition frequencies and optimal packet lengths around 1000 bytes to minimize overhead. Mobility-induced Doppler shift compounds this by causing frequency offsets that distort signals, particularly in high-speed scenarios; for wireless LANs under random mobility, increasing user speeds from 1 m/s to 10 m/s elevates bit error rates and lowers throughput due to inter-symbol interference.

Advanced techniques like beamforming in massive MIMO networks address multi-user and environmental losses by spatially directing signals to improve the signal-to-interference-plus-noise ratio (SINR), enabling simultaneous transmissions that boost capacity and mitigate degradation from mobility or fading. In dense deployments, this can enhance network efficiency, with implementations increasing overall capacity by approximately 50% through null-forming to suppress non-target interference. Conversely, intentional attacks, such as reactive or sweeping jamming, can drastically reduce throughput by overwhelming the medium; game-theoretic analyses show that optimal jamming strategies may decrease network throughput by up to 90% in wireless systems. To evaluate multi-user throughput allocation, fairness metrics like Jain's index quantify resource equity, defined as f(\mathbf{x}) = \frac{(\sum_{i=1}^n x_i)^2}{n \sum_{i=1}^n x_i^2}, where x_i is the normalized throughput for user i and n is the number of users, yielding 1 for perfectly equal allocations and approaching 1/n under severe disparity. This index, independent of population size and scale, relates inversely to the variance of allocations (f = \frac{1}{1 + \text{COV}^2}, where COV is the coefficient of variation), guiding scheduling policies to prevent starvation in contended environments.
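
As a direct implementation of Jain's index as defined above (the throughput values are illustrative), the snippet below evaluates fairness for an equal and a heavily skewed allocation.

```python
def jain_fairness_index(throughputs):
    """Jain's index: (sum x_i)^2 / (n * sum x_i^2); 1 for equal shares, -> 1/n when one user dominates."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_fairness_index([10, 10, 10, 10]))   # 1.0 (perfectly equal allocation)
print(jain_fairness_index([37, 1, 1, 1]))      # ~0.29, approaching 1/n = 0.25
```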

Goodput

Definition and Distinction from Throughput

Goodput refers to the application-level throughput of a communication, representing the number of useful information bits delivered by the network to the application per unit of time. It is measured in bits per second (bps) and counts only the payload successfully received, excluding headers, retransmissions, and erroneous bits. This emphasizes the effective rate at which an application can utilize the transferred data, ignoring all forms of protocol and retransmission overhead that do not contribute to the end-user result.

In contrast, network throughput—often simply called throughput—measures the total rate of bits transmitted over the network link, encompassing both the useful payload and all associated overhead, such as packet headers, acknowledgments, control traffic, and any retransmitted data due to losses or errors. Goodput is therefore always a subset of throughput, as it filters out non-payload elements to focus solely on the application-useful data delivered without duplication or loss. This distinction is critical in performance analysis, where throughput might appear high due to retransmissions or verbose protocols, but goodput reveals the actual efficiency experienced by the application.

For instance, during an HTTP file transfer over TCP, the throughput includes the full volume of data sent across the wire, incorporating sequence numbers, acknowledgment packets, and headers, along with any segments retransmitted due to packet loss. The goodput, however, is limited to the rate at which the file's content bytes are successfully assembled and passed to the HTTP application, excluding all TCP/IP overhead and duplicates. Encryption layers, such as those provided by TLS, introduce additional overhead through cipher expansion and handshake messages, further differentiating goodput from the underlying transport throughput; since its standardization in 2018, TLS 1.3 has mitigated some of this by reducing handshake round trips and eliminating legacy cipher options, thereby improving overall protocol efficiency.

Calculation and Overhead Impact

Goodput is calculated by adjusting the overall network throughput to account for the proportion of useful data and the impact of losses, providing a measure of the effective application-level data rate. The standard formula is \text{Goodput} = \text{Throughput} \times \left( \frac{\text{Payload size}}{\text{Total packet size}} \right), where the payload fraction represents the ratio of application data to the total packet size including headers. This approach ensures goodput excludes non-useful elements like headers; losses further reduce goodput, as retransmissions do not contribute to useful data delivery. To evaluate goodput in complex scenarios, discrete-event simulators such as ns-3 are widely employed, enabling researchers to track application-layer data reception over simulated time intervals and compute metrics like bytes of useful data per second.

Overhead sources significantly degrade goodput by consuming bandwidth and resources without advancing useful data transfer, and these can be categorized by network layer. At the transport layer, TCP acknowledgment (ACK) packets represent a key overhead, as they require transmission for reliability but carry no payload; studies in wireless access networks demonstrate that suppressing unnecessary ACKs can boost TCP goodput by approximately 50% under high-load conditions. At the network layer, routing protocols introduce control messages for path discovery and maintenance, which dilute the fraction of packets carrying application data and thereby lower goodput, as evidenced by comparisons showing reduced successful TCP delivery ratios in mobile ad-hoc networks. Application-layer overhead, such as the serialization and parsing of data formats like JSON, adds processing demands that indirectly limit goodput by increasing end-to-end delays, even though it occurs outside the wire protocol; this is particularly pronounced in bandwidth-constrained environments where inefficient encoding inflates effective transmission costs.

These overheads not only reduce efficiency but also amplify latency, as queued control packets and processing steps delay the delivery of time-sensitive data, exacerbating issues in real-time systems. For instance, in VoIP applications, jitter buffers mitigate packet arrival variations by holding incoming audio packets for 20 to 200 milliseconds, smoothing playback but introducing additional delay that diminishes the effective goodput for live streams by necessitating larger buffers and potential discards. In constrained networks, protocols like CoAP demonstrate superior efficiency over HTTP by minimizing header overhead and leveraging UDP for lighter transmission, with evaluations in dynamic topologies revealing higher delivery rates and throughput, making CoAP preferable for battery-limited devices.
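
Combining the payload-fraction formula above with a simple retransmission term (an illustrative model, not taken from the cited sources), the sketch below estimates goodput from measured throughput, packet composition, and the fraction of traffic that was retransmitted.

```python
def goodput_bps(throughput_bps, payload_bytes, total_packet_bytes, retransmit_fraction=0.0):
    """Goodput = throughput * (payload / total packet size) * (1 - retransmitted share)."""
    payload_fraction = payload_bytes / total_packet_bytes
    return throughput_bps * payload_fraction * (1 - retransmit_fraction)

# 100 Mbps measured throughput, 1460-byte payloads in 1500-byte packets, 2% retransmissions.
print(f"{goodput_bps(100e6, 1460, 1500, 0.02) / 1e6:.1f} Mbps")  # ~95.4 Mbps of useful data
```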

Applications Across Network Types

Wired and Optical Networks

In wired networks, Ethernet standards defined by IEEE 802.3 enable high-throughput data transmission over copper and fiber media, with recent advancements supporting speeds up to 800 Gbps by 2025 to meet escalating bandwidth demands in data centers and enterprise environments. For instance, the IEEE 802.3bt standard, ratified in 2018, facilitates Power over Ethernet (PoE) delivery of up to 90-100 W per port alongside data rates that can reach multi-gigabit levels on compatible cabling, powering devices like high-performance access points without separate power infrastructure. However, crosstalk—unwanted signal interference between adjacent wire pairs—imposes key limitations on twisted-pair links, particularly near-end crosstalk (NEXT), which degrades signal integrity at higher frequencies and longer distances, necessitating shielded cabling or Category 8 standards to maintain throughput over 100 meters.

Optical networks, leveraging fiber-optic guided media, achieve vastly superior throughput due to their immunity to electromagnetic interference and support for dense wavelength-division multiplexing (DWDM), which aggregates multiple wavelengths on a single fiber to deliver terabits per second (Tbps) in total capacity. In DWDM systems, up to 192 channels each carrying 100 Gbps can yield an aggregate of 19.2 Tbps, enabling backbone networks to handle massive data volumes for cloud and AI applications. Key impairments include chromatic dispersion, which causes pulse broadening over distance due to varying light speeds across wavelengths, and nonlinear effects such as four-wave mixing, where interactions between signals generate unwanted frequencies; both reduce effective throughput unless mitigated by dispersion-compensating fiber or advanced digital signal processing.

In data centers, sustained throughput of 400 Gbps per link has become commonplace, as demonstrated by service providers offering Ethernet connectivity at this rate across multiple facilities to support cloud and AI workloads and high-speed interconnects. Optical systems routinely achieve bit error rates (BER) below 10^{-12}, ensuring reliable transmission over thousands of kilometers through forward error correction (FEC) that corrects errors from residual dispersion and nonlinear impairments. Recent coherent advancements, demonstrated in trials during 2024-2025, enable 1.2 Tbps per wavelength using probabilistic constellation shaping and high-spectral-efficiency modulation, extending high-capacity transmission over ultra-long distances such as 3,050 km.
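
The aggregate DWDM figure quoted above is a simple product of channel count and per-channel rate; the one-liner below (example channel plan) makes the arithmetic explicit.

```python
channels = 192            # DWDM wavelengths multiplexed onto one fiber
per_channel_gbps = 100    # coherent 100 Gbps per wavelength
print(f"Aggregate fiber capacity: {channels * per_channel_gbps / 1000:.1f} Tbps")  # 19.2 Tbps
```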

Wireless and Cellular Networks

In wireless local area networks based on IEEE 802.11 standards, throughput is significantly influenced by spectrum availability, modulation schemes, and multi-user access techniques. The latest iteration, Wi-Fi 7 (IEEE 802.11be), achieves a theoretical peak throughput of approximately 46 Gbps through enhancements like 4096-QAM modulation, wider 320 MHz channels, and multi-link operation (MLO), which allows simultaneous transmission across multiple frequency bands. Multi-user multiple-input multiple-output (MU-MIMO) further amplifies these gains by supporting up to 16 spatial streams, enabling concurrent data streams to multiple devices and improving aggregate throughput in dense environments by up to 4 times compared to single-user operation in prior standards. These features address the inefficiencies of contention-based access in shared wireless mediums, where interference and mobility can otherwise degrade effective rates.

Cellular 5G networks, particularly those based on New Radio (NR), leverage distinct frequency ranges to balance coverage and capacity, with throughput varying markedly by band. In sub-6 GHz frequencies (FR1), peak downlink throughput reaches up to 4 Gbps using 100 MHz bandwidth, 256-QAM, and 8-layer MIMO, providing reliable performance for urban mobility scenarios. In contrast, millimeter-wave (mmWave) bands (FR2) enable peaks of 20 Gbps with 400 MHz channels and higher-order modulation, though limited by shorter range and susceptibility to blockages. Handovers between cells, essential for maintaining connectivity for mobile users, introduce temporary throughput disruptions; for instance, congestion window reductions average 48% post-handover, with recovery times up to 6.7 seconds, impacting latency-sensitive applications.

Key challenges in these radio-based systems stem from propagation characteristics, including path loss—which increases with frequency and distance, following models like free-space or log-distance propagation—and shadowing from obstacles, which introduces signal-strength variability of roughly 10-20 dB in urban settings and lowers achievable throughput. Carrier aggregation (CA) mitigates such limitations by combining multiple component carriers across bands, boosting throughput by 2-3 times in LTE-Advanced and 5G configurations, as specified in 3GPP standards for enhanced spectral efficiency. As of 2025, visions for 6G networks emphasize terahertz (THz) bands (0.1-10 THz) to target ultra-high throughputs exceeding 1 Tbps, leveraging vast unlicensed spectrum for applications like holographic communications, though challenges like molecular absorption and hardware constraints remain under active study by bodies such as IEEE and ITU.
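
To illustrate how path loss grows with frequency and distance as noted above, the sketch below evaluates the standard free-space path loss model (illustrative link parameters; real cellular links add shadowing and blockage margins on top of this).

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz):
    """FSPL (dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

# Same 200 m link at a sub-6 GHz and an mmWave carrier: roughly 18 dB more loss at 28 GHz.
print(f"{free_space_path_loss_db(200, 3.5e9):.1f} dB at 3.5 GHz")
print(f"{free_space_path_loss_db(200, 28e9):.1f} dB at 28 GHz")
```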

Analog and Legacy Systems

In analog network systems, such as dial-up modems operating over traditional telephone lines, throughput is fundamentally constrained by the physical characteristics of the twisted-pair wiring and the need to modulate digital data onto voiceband carrier signals. The V.92 standard, an enhancement to V.90, enables downstream data rates up to 56 kbit/s and upstream rates up to 48 kbit/s by leveraging pulse-code modulation (PCM) from the central office, where the downstream signal is generated digitally at the exchange, avoiding an extra analog-to-digital conversion and its associated quantization noise. The Nyquist-Shannon sampling theorem dictates that a voiceband signal with a bandwidth of approximately 4 kHz (300–3400 Hz) must be sampled at a rate of at least 8 kHz to be accurately reconstructed, limiting the effective symbol rate and thus throughput in these systems.

Legacy digital hybrid systems, like early DSL variants, build on analog infrastructure but introduce discrete multi-tone (DMT) modulation to achieve higher speeds over existing copper lines. For instance, VDSL2, standardized under ITU-T G.993.2, supports aggregate throughput up to 100 Mbit/s downstream and 50 Mbit/s upstream over distances up to 300 meters, using advanced profiles with quadrature amplitude modulation (QAM) schemes, including simpler forms like QPSK for robust upstream transmission in noisy environments. These systems represent a bridge from pure analog transmission, where modulation schemes like QPSK encode two bits per symbol to balance error rates and data efficiency on legacy phone lines. Throughput in these analog and legacy setups is further limited by quantization noise introduced during analog-to-digital conversion, which caps the achievable signal-to-noise ratio (SNR) at roughly 6 dB per bit of converter resolution, keeping achievable rates below theoretical maxima.

As networks migrated toward fully digital architectures, technologies like DOCSIS 3.1 for cable modems enabled downstream throughput up to 10 Gbit/s by utilizing orthogonal frequency-division multiplexing (OFDM) over coaxial lines, marking a significant evolution beyond analog constraints. In niche and historical contexts, such as rural areas with extensive legacy copper infrastructure, G.fast (ITU-T G.9701) has seen renewed deployment by 2025 to deliver up to 1 Gbit/s over short copper loops (under 100 meters), reducing the digital divide without full fiber replacement. This revival leverages existing phone wiring for gigabit access in underserved regions, prioritizing cost-effective upgrades over new installations.
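
A small numerical check of the sampling and quantization limits discussed above (standard textbook formulas with illustrative parameters): the Nyquist rate for a 4 kHz voiceband channel and the approximate SNR ceiling imposed by an ideal N-bit quantizer.

```python
def nyquist_rate_hz(max_signal_bandwidth_hz):
    """Minimum sampling rate that permits exact reconstruction of a band-limited signal."""
    return 2 * max_signal_bandwidth_hz

def quantization_snr_db(bits):
    """Approximate peak SNR of an ideal N-bit quantizer: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(f"Voiceband (4 kHz) Nyquist rate: {nyquist_rate_hz(4000)} samples/s")   # 8000 samples/s
print(f"8-bit PCM SNR ceiling: {quantization_snr_db(8):.1f} dB")              # ~49.9 dB
```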
