Goodput

Goodput is the application-level throughput in computer networks, defined as the number of bits per unit of time forwarded by the network to the correct destination interface, excluding any bits that are lost or require retransmission. This metric focuses solely on the useful payload data successfully delivered to the application, distinguishing it from broader measures of data transmission. In networking performance evaluation, goodput provides a more accurate assessment of effective data transfer efficiency than throughput, which counts all transmitted bits, including protocol headers, acknowledgments, and retransmissions. For instance, in TCP/IP benchmarking, goodput is derived from the payload bytes per frame and the achieved frame rate relative to the media speed, reflecting real-world application performance under conditions like congestion or packet loss.

Goodput is particularly valuable in scenarios involving active queue management (AQM) and congestion control, where minimizing drops or markings via mechanisms like Explicit Congestion Notification (ECN) can maximize goodput while balancing latency. Goodput is typically lower than the link's theoretical capacity or raw throughput, reflecting the overheads and inefficiencies of protocols like TCP or UDP. In benchmarking environments, it serves as a key metric for evaluating device under test (DUT) or system under test (SUT) capabilities, ensuring that forwarded traffic aligns with application needs rather than just aggregate volume.

Definition and Concepts

Definition

Goodput is the application-level throughput in computer networks, defined as the number of useful information bits delivered by the network to the application per unit time. This metric emphasizes the effective rate at which an application receives processable data, distinguishing it from lower-layer transmission rates. In calculating goodput, protocol overheads, such as headers and footers, are excluded, as are any retransmitted packets resulting from errors or losses. This exclusion keeps the focus on the data that contributes to the application's functionality rather than ancillary or redundant traffic. Conceptually, goodput prioritizes the "useful" bits that the application can directly utilize, ignoring raw volume.

Goodput is always less than or equal to throughput, which measures the total volume of bits transferred over a link per unit time, encompassing overheads, headers, and any retransmitted packets; goodput excludes these non-payload elements and erroneous transmissions because they do not contribute to end-user data. Throughput, in turn, cannot exceed the channel's maximum capacity, establishing goodput as a conservative measure of usable bandwidth and underscoring its focus on application-level efficiency rather than raw transmission volume.

In contrast to bandwidth, which denotes the theoretical maximum transfer rate of a physical link or connection (for example, 100 Mbit/s for a Fast Ethernet link), goodput reflects the actual achieved rate of useful data under operational conditions, limited by factors like congestion and protocol inefficiencies. Bandwidth remains constant regardless of utilization or impairments, serving as an upper bound that goodput rarely approaches in practice due to real-world deductions.

Unlike latency, which quantifies the time delay for a data packet to traverse from source to destination across the network, goodput is a throughput-oriented metric emphasizing the sustained rate of payload delivery rather than the temporal aspects of transmission. Latency affects the responsiveness of individual packets but does not directly measure data volume or efficiency, whereas goodput prioritizes the aggregate rate of beneficial information transfer over delay characteristics.

Goodput also differs from effective bandwidth, which describes the highest reliable sustained transmission rate achievable under varying loads on a given link, incorporating minimal overhead but not fully excluding protocol costs. Effective bandwidth is therefore a broader indicator of link performance in loaded scenarios, while goodput, which strips away all overhead and errors, provides a purer measure of payload efficacy at the endpoint.
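These comparisons reduce to a simple ordering, restated here as a compact summary of the definitions above rather than an additional claim:

\text{Goodput} \leq \text{Throughput} \leq \text{Channel capacity}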

Calculation and Measurement

Formulas

The goodput of a network connection is fundamentally calculated as the ratio of the number of useful bits successfully delivered to the application to the total time required for the transfer. This metric focuses exclusively on the effective data rate, disregarding overheads and erroneous transmissions.

\text{Goodput} = \frac{\text{useful payload bits}}{\text{total transfer time}}

In packet-based networks, an extended expression refines this by accounting for packet-level details: goodput equals the payload size per packet (packet size minus header size), multiplied by the number of successfully received packets, divided by the total time.

\text{Goodput} = \frac{(\text{packet size} - \text{header size}) \times \text{number of successful packets}}{\text{total transmission time}}

The total time in these formulas incorporates the transmission time (duration to serialize bits onto the medium), propagation delay (time for signals to travel the distance), queuing delay (waiting time at buffers), and any additional delays, including those from retransmissions. Goodput is conventionally expressed in bits per second (bit/s) or bytes per second (B/s), with the standard conversion factor being 1 B/s = 8 bit/s to facilitate comparability across tools and protocols. As an application-layer metric, the formula applies after upper-layer processing, thereby excluding non-payload elements such as TCP acknowledgment packets, which do not contribute to the useful data transferred.
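As a minimal illustration of the packet-level formula, the following Python sketch computes goodput from hypothetical inputs; the packet size, header size, packet count, and transfer time are all assumed values chosen for the example:

```python
def goodput_bps(packet_size_bytes: int, header_bytes: int,
                packets_received: int, total_time_s: float) -> float:
    """Packet-level goodput: useful payload bits delivered per second."""
    payload_bits = (packet_size_bytes - header_bytes) * packets_received * 8
    return payload_bits / total_time_s

# Hypothetical transfer: 10,000 1500-byte IP packets, each carrying
# 40 bytes of TCP/IPv4 headers, delivered in 1.25 seconds.
rate = goodput_bps(1500, 40, 10_000, 1.25)
print(f"Goodput: {rate / 1e6:.2f} Mbit/s")  # -> 93.44 Mbit/s
```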

Example Calculations

To illustrate the computation of goodput, consider practical scenarios where the basic formulas for payload efficiency and protocol-specific overheads are applied. These examples assume ideal conditions without congestion, errors, or additional delays, as detailed in the formulas section; real-world measurements often yield lower values due to variable factors.

In an Ethernet network operating at 100 Mbit/s with a maximum transmission unit (MTU) of 1500 bytes, the IP and TCP headers typically consume 40 bytes, leaving 1460 bytes of application payload per frame. The full Ethernet frame size, including 18 bytes of Layer 2 overhead (14-byte header plus 4-byte frame check sequence), is 1518 bytes. The resulting goodput is therefore approximately (1460 / 1518) × 100 Mbit/s ≈ 96.2 Mbit/s. This calculation excludes interframe gaps, the preamble, and other physical-layer overheads, which further reduce effective goodput to around 94-95 Mbit/s in practice.

For wireless local area networks (WLANs) using IEEE 802.11a at a data rate of 54 Mbit/s, goodput accounts for significant PHY- and MAC-layer overheads, including preambles, acknowledgments (ACKs), and interframe spacings. For a 1500-byte maximum MAC service data unit (MSDU), the MAC-layer overhead without request-to-send/clear-to-send (RTS/CTS) handshakes reaches 43% of the airtime, because these fixed costs are large relative to the shorter payload transmission duration at higher rates. This yields a goodput of approximately 54 Mbit/s × (1 - 0.43) ≈ 30.8 Mbit/s; with RTS/CTS enabled, overhead increases to 53%, dropping goodput to about 25.4 Mbit/s. Typical measured values in low-contention environments range from 30 to 40 Mbit/s, depending on the exact PHY mode and channel conditions.

In a TCP stream over a 10 Mbit/s link experiencing 1% random packet loss, goodput is limited by the protocol's congestion control response, which triggers retransmissions and rate reductions. Using the Mathis equation for TCP Reno, the steady-state throughput approximates

\text{Throughput} \approx \frac{MSS \times C}{RTT \sqrt{p}}

where MSS is the maximum segment size (typically 1460 bytes), C \approx 1.22 is a constant accounting for loss recovery, RTT is the round-trip time, and p = 0.01 is the loss probability. For low RTT (e.g., 1 ms), where the link capacity would otherwise be fully utilized, the loss-induced reductions, primarily from halved congestion windows upon triple duplicate ACKs and occasional timeouts, can reduce goodput by approximately 5-10% after factoring in retransmission bandwidth consumption and congestion control effects.

These examples highlight the gap between raw link capacity and achievable goodput under protocol constraints, but they rely on simplifying assumptions such as error-free channels, saturated flows, and no competing traffic. Actual deployments require empirical measurement tools like iperf or netperf to capture variations from queuing delays, variable RTT, or bursty losses, often resulting in 10-20% lower goodput than these ideals.
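A short sketch of the Mathis estimate for the lossy-link scenario above; the parameter values mirror the worked example, and the cap at the link rate reflects that the model's estimate can exceed the physical capacity at very short RTTs:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_prob: float,
                          c: float = 1.22) -> float:
    """Steady-state TCP Reno throughput estimate (Mathis equation)."""
    return (mss_bytes * 8 * c) / (rtt_s * math.sqrt(loss_prob))

link_bps = 10e6  # the 10 Mbit/s link from the example above
estimate = mathis_throughput_bps(mss_bytes=1460, rtt_s=0.001, loss_prob=0.01)

# At 1 ms RTT the model's estimate (~142 Mbit/s) exceeds the physical
# rate, so the link itself remains the binding constraint on goodput.
print(f"Mathis estimate: {estimate / 1e6:.1f} Mbit/s; "
      f"achievable goodput capped at {min(estimate, link_bps) / 1e6:.1f} Mbit/s")
```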

Factors Affecting Goodput

Protocol Overheads

Protocol overheads in networking refer to the fixed byte costs imposed by headers and control mechanisms at various protocol layers, which do not carry application data and thus diminish goodput by reducing the fraction of bandwidth available for useful payload. In the TCP/IP stack, these overheads accumulate across layers, encapsulating the payload multiple times and consuming resources without contributing to data transfer efficiency. Goodput, defined as the rate of successful application-level data delivery, is directly impacted because these bytes must be transmitted over the same link capacity as the payload.

At the transport layer, the TCP header has a minimum size of 20 bytes, including fields for sequence numbers, acknowledgments, and flags, which enable reliable delivery but add no value to the user payload. The network layer contributes an IPv4 header of 20 bytes, covering addressing, fragmentation, and routing information essential for packet delivery across the internetwork. At the data link layer, Ethernet frames impose an 18-byte overhead, comprising a 14-byte header (destination and source MAC addresses plus EtherType) and a 4-byte frame check sequence for error detection. For a standard 1500-byte maximum transmission unit (MTU), the combined TCP and IPv4 headers alone represent about 40 bytes, or roughly 2.67% of the packet size, illustrating how these fixed costs erode payload efficiency even before link-layer encapsulation. Beyond data packets, TCP control packets such as SYN and SYN-ACK for connection establishment and FIN for graceful closure carry no application payload, yet they consume bandwidth and processing resources, further reducing overall goodput during setup and teardown phases.

Application-layer protocols exacerbate these overheads. For instance, HTTP requests typically include headers totaling 200-500 bytes, encompassing fields like Host, User-Agent, and Accept for request semantics and content negotiation, which must be sent with each transaction. When layered with security protocols, TLS introduces additional overhead through record headers (5 bytes) and padding to align with cipher block requirements, often adding 1-16 bytes per record depending on the cipher mode and plaintext length. This padding ensures proper encryption but inflates the transmitted size without conveying user content.

The layered nature of the OSI/TCP/IP model compounds these effects, as each protocol adds its headers independently, leading to total overheads of 10-20% in typical scenarios where small payloads and frequent connections amplify the relative cost. To mitigate this, techniques such as Robust Header Compression (ROHC) exploit redundancies in IP, UDP, and RTP headers to shrink them from 40 bytes to as little as 1-4 bytes, particularly beneficial in bandwidth-constrained environments like cellular networks. Additionally, using larger MTUs, such as jumbo frames up to 9000 bytes, improves the payload-to-overhead ratio by spreading fixed header costs over more data bytes, enhancing goodput in high-speed LANs without fragmentation risks.
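A small sketch, under the header sizes assumed above (40 bytes of TCP/IPv4 headers, 18 bytes of Ethernet framing), comparing payload efficiency for a standard MTU and a jumbo frame:

```python
def payload_efficiency(mtu_bytes: int, l3l4_headers: int = 40,
                       l2_overhead: int = 18) -> float:
    """Fraction of each Ethernet frame carrying application payload."""
    return (mtu_bytes - l3l4_headers) / (mtu_bytes + l2_overhead)

for mtu in (1500, 9000):  # standard MTU vs. jumbo frame
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload efficiency")
# MTU 1500: 96.2%; MTU 9000: 99.4% -- larger frames amortize fixed headers.
```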

Network and Error Conditions

In TCP-based networks, packet loss triggers retransmissions and invokes congestion control mechanisms, significantly degrading goodput. When a packet is lost, TCP's recovery procedures, such as fast retransmit or retransmission timeout, halt new data transmission until the loss is acknowledged, effectively reducing the achievable rate. For instance, a 1% packet loss rate can reduce TCP goodput by approximately 70-90% compared to loss-free conditions, depending on round-trip time (RTT) and segment size, as the sender interprets loss as congestion and halves the congestion window. This impact is modeled by the Mathis equation for TCP throughput under random loss:

BW = \frac{MSS \times C}{RTT \sqrt{p}}

where BW is the steady-state bandwidth (a goodput approximation), MSS is the maximum segment size, C is a constant (typically around 1.22 for delayed ACKs), RTT is the round-trip time, and p is the loss probability; the inverse square-root dependence on p illustrates how even low loss rates sharply curb performance.

Congestion control algorithms further limit goodput in high-load environments by dynamically adjusting the sending rate to prevent network overload. TCP's slow-start phase exponentially increases the congestion window until a threshold or loss event, after which congestion avoidance linearly ramps it up, often capping goodput well below available bandwidth during bursts or on shared links. In scenarios with multiple flows, this results in fair sharing but reduced aggregate goodput, as each flow probes capacity conservatively to avoid collapse; for example, under sustained congestion, goodput may stabilize at 40-60% of link capacity due to these phased adjustments.

Propagation and queuing delays elevate the effective RTT in the goodput denominator, prolonging the time to transfer useful data and amplifying loss sensitivity per the Mathis model. Propagation delay, inherent to long-distance paths like wide-area networks, adds fixed latency based on signal speed over distance, while queuing delay varies with buffer occupancy and traffic load, often exacerbating RTT during peaks and reducing goodput by 20-50% on paths exceeding 100 ms RTT. In bufferbloat scenarios, excessive queuing can inflate delays to seconds, severely limiting TCP's window growth and goodput.

Bit errors in networks, quantified by the bit error rate (BER), corrupt packets and induce drops or retransmissions, directly lowering goodput independent of congestion. BERs in the range of 10^{-6} to 10^{-5} translate to packet error rates of roughly 1-8% for 1000-byte payloads, as even a single bit flip triggers a discard, reducing goodput by up to 50% in fading channels without forward error correction. Jitter, the variation in packet arrival times, compounds this for real-time applications by causing playout buffer underruns or timeouts, leading to application-level discards and effective goodput drops of 30-70% in VoIP or streaming where timely reconstruction is critical.

Environmental factors such as wireless interference and routing inefficiencies introduce variable losses and delays that further diminish goodput. Interference from co-channel signals in dense 802.11 deployments increases collision rates, elevating effective loss to 5-20% and halving goodput in multi-hop scenarios via the Mathis model. Routing inefficiencies, like suboptimal path selection in ad-hoc or dynamic topologies, prolong RTT and accumulate errors across hops, reducing end-to-end goodput by 40-60% compared to direct links, as quantified in interference-aware models.
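The packet-level impact of a given BER follows directly from the probability that no bit in the packet is flipped; a brief sketch, assuming independent bit errors:

```python
def packet_error_rate(ber: float, packet_bytes: int) -> float:
    """Probability that at least one bit in the packet is corrupted."""
    return 1.0 - (1.0 - ber) ** (packet_bytes * 8)

for ber in (1e-6, 1e-5, 1e-4):
    per = packet_error_rate(ber, packet_bytes=1000)
    print(f"BER {ber:.0e} -> packet error rate {per:.1%}")
# BER 1e-06 -> ~0.8%; 1e-05 -> ~7.7%; 1e-04 -> ~55.1%
```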

Applications and Importance

In Network Protocols

In TCP, goodput is primarily determined by the congestion window (cwnd) size and the round-trip time (RTT), with the effective sending rate approximated as cwnd divided by RTT, limiting the amount of unacknowledged data in flight to prevent network overload. This mechanism ensures adaptive flow control, but it can reduce goodput during congestion events as cwnd shrinks in response to packet losses inferred from timeouts or duplicate acknowledgments. Optimizations such as selective acknowledgments (SACK) enhance TCP goodput by allowing receivers to report non-contiguous blocks of received data, enabling senders to retransmit only truly lost segments and avoid unnecessary retransmissions of already-received packets. For instance, SACK combined with fast recovery algorithms minimizes the impact of multiple losses within a single window, improving overall efficiency in varied network conditions.

UDP-based systems offer higher goodput potential than TCP due to the lack of built-in reliability mechanisms, such as acknowledgments and retransmissions, which eliminates the associated protocol overhead and reduces latency. However, this comes at the cost of vulnerability to packet losses, as UDP provides no recovery, making it prone to data corruption or gaps in lossy environments. In streaming applications, where timeliness is prioritized over complete delivery and minor losses are tolerable without significant user impact, UDP maximizes goodput by focusing on rapid, continuous data delivery. Protocols like RTP over UDP exemplify this, supporting real-time audio and video transport with minimal header overhead to sustain high effective throughput for time-sensitive payloads.

For HTTP and HTTPS, goodput is significantly influenced by connection establishment and management overheads; HTTP/1.1 introduced persistent connections, allowing multiple requests and responses over a single session to amortize the three-way handshake and slow-start costs, thereby increasing overall goodput for sequential resource fetches. HTTP/2 builds on this with multiplexing and server-push capabilities, enabling pipelined transmission of multiple streams within one connection without application-layer head-of-line blocking, which further reduces latency and boosts goodput, especially for pages with numerous small objects. These advancements minimize idle connection time and overhead, leading to more efficient application-level data transfer in web browsing scenarios.

The QUIC protocol addresses limitations in traditional TCP by multiplexing transport-layer functionality directly into the application layer over UDP, reducing connection setup through integrated TLS 1.3 and 0-RTT resumption, which collectively lowers handshake overhead and enhances goodput. In lossy networks, QUIC's independent stream handling prevents a single packet loss from blocking all streams, unlike TCP's byte-stream model, resulting in QUIC outperforming TCP in more than 90% of cases for page load times on lossy networks such as cellular links. QUIC was standardized in May 2021 (RFC 9000) and serves as the transport protocol for HTTP/3, which has seen widespread adoption by 2025. This design makes QUIC particularly effective for mobile and variable-quality links, where it maintains robust goodput by decoupling congestion control from loss recovery.

The concept of goodput emerged in 1990s networking literature to differentiate application-useful throughput from raw throughput metrics in performance analyses, first notably appearing in evaluations of TCP enhancements over wireless links to quantify effective data delivery after accounting for errors and retransmissions.
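The cwnd/RTT bound described above can be made concrete with a short sketch; the window size, RTT, and link rate are hypothetical values chosen for illustration:

```python
def window_limited_rate_bps(cwnd_bytes: int, rtt_s: float,
                            link_bps: float) -> float:
    """A TCP flow's rate is bounded by cwnd/RTT and by the link capacity."""
    return min(cwnd_bytes * 8 / rtt_s, link_bps)

# Hypothetical flow: 64 KiB congestion window, 50 ms RTT, 100 Mbit/s link.
rate = window_limited_rate_bps(64 * 1024, 0.050, 100e6)
print(f"Window-limited rate: {rate / 1e6:.2f} Mbit/s")  # -> 10.49 Mbit/s
```

Note that the window, not the link, is the binding constraint here: a flow with a 64 KiB window on a 50 ms path cannot exceed about 10.5 Mbit/s regardless of link speed, which is why long-RTT paths need large windows to achieve high goodput.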

Performance Evaluation Tools

Several software tools facilitate the measurement of goodput in network environments by generating controlled traffic and analyzing payload delivery rates. iPerf, an open-source utility, measures TCP and UDP throughput and goodput through bidirectional streams, reporting the effective data transfer rate, excluding overheads, after a configurable warm-up period. Similarly, Netperf provides application-level benchmarks for TCP and UDP streams, allowing users to assess goodput under various socket configurations and buffer sizes to simulate real-world application performance. These tools are widely adopted for their ability to isolate goodput from total throughput by focusing on application data bytes successfully received over time.

Network analyzers like Wireshark enable post-capture goodput calculation by dissecting packet payloads and excluding headers, retransmissions, and control traffic from the analysis. Through its TCP Stream Graph feature, Wireshark visualizes goodput alongside throughput, permitting users to filter for application-layer data rates in captured traces from live or offline interfaces. This approach is particularly useful for diagnosing goodput degradation in complex scenarios involving multiple protocols.

Simulation environments such as ns-3 and OMNeT++ support modeling goodput under controlled variables like topology, mobility, and interference. In ns-3, the FlowMonitor module computes goodput as the ratio of application-layer bytes received to simulation time, enabling evaluations in wireless or wired network simulations. OMNeT++, often extended with the INET framework, models goodput by tracking successful packet deliveries at the application layer, facilitating scenario-based analysis for protocols like TCP in multi-hop networks.

Goodput measurements are frequently integrated with application-specific quality metrics to assess end-user experience. For VoIP systems, goodput is correlated with the mean opinion score (MOS), where sustained payload rates above codec requirements (e.g., 64 kbit/s for G.711) maintain MOS scores above 4.0 on a 1-5 scale. In video streaming, goodput combines with peak signal-to-noise ratio (PSNR) to evaluate playback quality, ensuring that effective bitrate delivery minimizes artifacts in adaptive streaming protocols like MPEG-DASH.

Best practices for goodput evaluation include conducting multiple trials to average out variability, excluding initial warm-up intervals to avoid buffer inflation effects, and accounting for operating system buffer tuning to reflect realistic conditions. Tools like iPerf and Netperf align with standardized methodologies, such as those outlined in RFC 2544 for Ethernet performance testing, which can be adapted to isolate goodput by focusing on frame payloads rather than raw throughput.
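As an illustration of the application-level measurement principle these tools share, the following self-contained Python sketch times a TCP transfer over loopback and reports bytes delivered to the application per unit time; the payload and transfer sizes are arbitrary, and a real evaluation would run sender and receiver on separate hosts and average several trials:

```python
import socket
import threading
import time

PAYLOAD = b"x" * 4096
TOTAL_BYTES = 50 * 1024 * 1024  # 50 MiB of application data

def receiver(listener: socket.socket) -> None:
    """Count application bytes and time them on the receiving side."""
    conn, _ = listener.accept()
    with conn:
        received = 0
        start = time.perf_counter()
        while chunk := conn.recv(65536):
            received += len(chunk)
        elapsed = time.perf_counter() - start
    # Headers and kernel-level retransmissions are invisible at this
    # layer, so application bytes / elapsed time approximates goodput.
    print(f"Goodput: {received * 8 / elapsed / 1e6:.1f} Mbit/s")

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
t = threading.Thread(target=receiver, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    sent = 0
    while sent < TOTAL_BYTES:
        client.sendall(PAYLOAD)
        sent += len(PAYLOAD)

t.join()
```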
