Fenwick tree
A Fenwick tree, also known as a binary indexed tree (BIT), is a compact data structure that maintains an array of values while supporting efficient point updates and prefix sum queries, both in O(log n) time complexity where n is the array size.[1] It achieves this by representing the array in a single contiguous structure of size n, leveraging bitwise operations on indices—such as counting trailing zeros or least significant bit manipulation—to traverse and aggregate values across a logical tree hierarchy without explicit tree nodes.[2]
Introduced by Peter M. Fenwick in 1994, the structure was originally designed to handle cumulative frequency tables for adaptive arithmetic data compression, where it provides faster access and more compact storage than prior methods like move-to-front or heap-based reorganizations, with query and update times proportional to the number of 1-bits in the binary index (typically O(log n)).[1] In this context, the tree decomposes frequencies into subfrequencies based on binary representations, enabling dynamic maintenance of sums for large symbol alphabets, such as 256 symbols in binary files or 96 in text processing.[1]
Beyond its origins in compression, Fenwick trees have become a staple in algorithmic applications requiring dynamic range queries, including competitive programming problems for sum or frequency aggregation, parallel clustering algorithms like density peaks, and augmented data structures for order statistics.[3][4] Their simplicity, low constant factors, and space efficiency—using exactly n elements without overhead—make them preferable to alternatives like segment trees in scenarios where only prefix sums (or easily derivable range sums) are needed.[5]
Introduction
Definition and Purpose
A Fenwick tree, also known as a binary indexed tree (BIT), is an array-based data structure that maintains cumulative sums (prefix sums) of an array to support efficient point updates and range queries. It stores the array elements in a way that each position holds the sum of a specific range of elements, determined by the binary representation of the indices, allowing the structure to represent the underlying array compactly without explicit tree nodes. This design was originally introduced to handle cumulative frequency tables in dynamic arithmetic data compression algorithms.[1]
The primary purpose of a Fenwick tree is to enable both updating an individual array element and querying the prefix sum up to a given index in O(log n) time, where n is the size of the array, making it suitable for applications involving frequent modifications and sum computations on large datasets. Unlike static prefix sum arrays, which require O(n) time to rebuild after any update, the Fenwick tree dynamically adjusts the cumulative sums during updates, balancing efficiency for both operations. Binary indexing of the positions is used to navigate and aggregate these sums; the full mechanics follow from the structure's array layout, described below.[6]
Understanding this data structure assumes basic familiarity with arrays and the concept of prefix sums. The naive approach computes each prefix sum by iterating from the start of the array, taking O(n) time per query, which becomes inefficient for large-scale or dynamic data, such as computational problems requiring thousands of operations.[7]
Motivation
The Fenwick tree addresses key limitations of static prefix sum arrays in dynamic environments, where elements are frequently updated after initial setup. Computing prefix sums on a static array is efficient at O(1) per query after O(n) preprocessing, but any update requires rebuilding the entire array in O(n) time, which becomes impractical for large-scale data with ongoing modifications, such as in online algorithms or real-time processing systems. The Fenwick tree overcomes this by enabling both point updates and prefix sum queries in O(log n) time, using a compact array-based structure that leverages binary indexing for efficient traversal, thus supporting associative operations like summation without full recomputation.
The Fenwick tree was developed specifically to maintain cumulative frequency tables for dynamic arithmetic data compression.[1] Prior methods, such as move-to-front lists or heap-based reorganizations, suffered from slower update times or larger memory footprints for large alphabets; the Fenwick tree's design provides logarithmic (O(log n)) access times with simpler implementation and more compact storage.[1]
Beyond its origins, the Fenwick tree finds essential use in computing partial sums for time series data, including financial tracking where transaction volumes or prices change dynamically, allowing quick aggregation of balances or returns without linear rescans. It also supports efficient counting of array inversions in modified merge sort algorithms by maintaining dynamic frequency counts of processed elements to tally pairs out of order in O(n log n) total time.[8] Additionally, it enables frequency counting in data streams or dynamic multisets, where elements arrive or are modified online, facilitating real-time queries on cumulative distributions such as in topic modeling or histogram maintenance.[9]
Historical Background
Invention by Peter Fenwick
The Fenwick tree, also known as the binary indexed tree, was developed by Peter M. Fenwick, a computer scientist in the Department of Computer Science at the University of Auckland, New Zealand.[1] Fenwick created the structure in 1994 to address the need for an efficient method to maintain and update cumulative frequency tables in the context of dynamic arithmetic data compression algorithms.[1] Prior approaches, such as move-to-front lists or heap-based reorganizations, incurred high overhead for symbol frequency adjustments, particularly in adaptive coding scenarios where probabilities evolve with incoming data; Fenwick's design leveraged binary decomposition of indices to enable logarithmic-time operations without such reorganizations.[1]
The invention stemmed from challenges in arithmetic coding, where maintaining cumulative probabilities for symbol selection is critical for encoding and decoding efficiency. Fenwick's binary indexed tree uses a single array to store partial sums, allowing range queries and point updates through a tree-like traversal based on the binary representation of array indices.[1] This approach proved particularly advantageous for large alphabets, as its space usage and access times scale logarithmically with the table size, outperforming earlier methods in both compactness and speed.[1]
Fenwick first described the structure in his seminal paper titled "A new data structure for cumulative frequency tables," published in the journal Software: Practice and Experience (Volume 24, Issue 3, pages 327–336).[1] In this work, he detailed the tree's construction, update, and search procedures, demonstrating its applicability to compression tasks while emphasizing the simplicity of its implementation. A correction to the paper was published later in 1994.[1][10] The paper highlighted how the binary indexing inspiration—decomposing indices into power-of-two components—facilitates intuitive and efficient navigation.[1]
Evolution and Naming
Following its initial proposal for efficient cumulative frequency computations in data compression applications, the Fenwick tree rapidly gained traction in competitive programming and algorithms research during the late 1990s and early 2000s, driven by its simplicity and performance advantages over alternatives like segment trees.[11] This adoption was facilitated by early open-source implementations and its inclusion in educational materials for algorithm design, where it became a staple for handling dynamic prefix sum queries and updates.[12]
The structure is known by two primary names: "Fenwick tree," which honors its inventor Peter M. Fenwick, and "binary indexed tree" (BIT), a term that highlights the underlying binary representation and indexing mechanism using bitwise operations.[2] The BIT nomenclature is particularly prevalent in non-English-language technical literature and programming communities, often arising from direct translations of the original description emphasizing the binary structure, while "Fenwick tree" predominates in English sources to credit the originator.[12]
As of 2025, the Fenwick tree has achieved widespread recognition, with integrations into common C++ templates for competitive coding and references in standard algorithm libraries for educational platforms. It has been widely cited in academic papers and textbooks on data structures, underscoring its enduring utility, though the core structure has seen no major updates since its inception, remaining focused on O(log n) operations for updates and queries.
Underlying Principles
Prefix Sum Computations
A prefix sum array S for an input array A = [A[1], A[2], \dots, A[n]] is defined such that S[i] = \sum_{j=1}^{i} A[j] for each i = 1 to n, with S[0] = 0. This construction enables efficient range sum queries, where the sum of elements from index l to r (inclusive) is given by S[r] - S[l-1] in constant time, transforming cumulative computations into simple subtractions.[13][14]
In the static case, where the array A remains unchanged after initial setup, the prefix sum array can be computed in linear time O(n) via the recurrence relation
S[i] = S[i-1] + A[i]
for i = 1 to n. Queries then proceed in O(1) time each. However, any update to an element A[i] necessitates rebuilding the prefix sums from that position onward to maintain correctness, imposing an O(n) cost per update in the worst case.[14][15]
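To make the trade-off concrete, a minimal Python sketch of the static approach (the helper name build_prefix and the sample array are this example's own) shows the O(n) construction alongside the O(n) rebuild that a single update forces:

# Static prefix sums: O(n) build, O(1) range query, but O(n) rebuild per update.
def build_prefix(A):
    S = [0] * (len(A) + 1)              # S[0] = 0; S[i] = A[1] + ... + A[i]
    for i, x in enumerate(A, start=1):
        S[i] = S[i - 1] + x
    return S

A = [3, 1, 4, 2]
S = build_prefix(A)
print(S[3] - S[1])                      # sum of elements 2..3 = 1 + 4 = 5
A[1] = 7                                # a single update (0-based position 1)...
S = build_prefix(A)                     # ...forces an O(n) rebuild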
Dynamic scenarios, involving a mix of m updates and q prefix sum queries on an array of size n, expose the limitations of static approaches. A naive strategy—recomputing the prefix sum from scratch after each update and performing linear scans for each query—yields a total time complexity of O((m + q)n). This is inefficient for large-scale problems, such as those with n up to 10^6 and m, q up to 10^5, resulting in up to 10^{11} operations that exceed practical computational limits. Such challenges motivate the development of specialized data structures capable of handling both updates and queries in O(\log n) time, often by exploiting binary index representations.[16][17]
Binary Representation and Indexing
The binary representation of an index plays a central role in the Fenwick tree's efficiency, as it dictates which nodes are responsible for storing and aggregating prefix sums over specific ranges. For an index i, its binary form identifies the positions of set bits that correspond to the tree's hierarchical structure, where each node at index i is responsible for a contiguous range of elements whose length is a power of two, determined by the trailing zeros in i's binary representation. This alignment ensures that updates and queries traverse only \log n levels, leveraging the inherent properties of binary numbers to isolate subtrees without explicit tree pointers.[18]
The least significant bit (LSB) of i, which is the rightmost set bit in its binary form, is crucial for isolating the subtree rooted at that node and for navigating to parent or sibling nodes. Computationally, the LSB can be extracted using the bitwise operation i \& -i, which yields the value 2^k where k is the position of the LSB; this isolates the power-of-two range covered by the node at i, starting from i - (i \& -i) + 1 to i. In this way, the binary structure ensures that each node aggregates sums for exactly 2^k consecutive elements, aligned to the binary boundaries, providing a compact encoding of the prefix sum hierarchy.[2]
For the update operation at index i, the process begins by modifying the node at i and then propagates upward by repeatedly adding the LSB to the current index: add the delta to the node at i, then i \leftarrow i + (i \& -i), continuing until i exceeds the array size; this targets the covering ranges in the tree. Similarly, for a prefix sum query up to i, the sum is accumulated by adding the value at i and then moving to the previous responsible node via i \leftarrow i - (i \& -i), repeating while i > 0; subtracting the LSB effectively covers the disjoint power-of-two ranges that compose the prefix. These operations exploit the binary indexing to achieve O(\log n) time without storing explicit parent links.[2][18]
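A small illustrative Python sketch (function names such as lowbit, update_path, and query_path are chosen here for exposition) shows the lowest-set-bit extraction and the index sequences these two traversals visit:

# Illustrative sketch: the lowest set bit and the index sequences visited
# by an update and a prefix query, for an array of size n = 16.
def lowbit(i):
    return i & -i                       # isolates the least significant set bit

def update_path(i, n):
    path = []
    while i <= n:
        path.append(i)
        i += lowbit(i)
    return path

def query_path(i):
    path = []
    while i > 0:
        path.append(i)
        i -= lowbit(i)
    return path

print(update_path(5, 16))               # [5, 6, 8, 16]
print(query_path(13))                   # [13, 12, 8]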
Structure and Representation
Array-Based Storage
The Fenwick tree, also known as a binary indexed tree, employs a compact array-based representation to implicitly encode the hierarchical structure required for efficient prefix sum operations. This implementation utilizes a one-dimensional array FT[1 \dots n], where n corresponds to the length of the underlying data array, and the zeroth index remains unused to simplify binary indexing conventions.[17][12]
Each element FT[i] stores the cumulative sum of a contiguous range in the original array, specifically covering the elements from position i - (i \& -i) + 1 to i, where i \& -i (the bitwise AND of i with its two's complement negation) identifies the least significant set bit in the binary form of i. This storage pattern decomposes the prefix sums into portions aligned with the binary representation of the indices, enabling the tree's implicit connectivity without additional metadata.[17][12]
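As an illustration, the covered range of each position can be computed directly from the least significant set bit; the following short Python sketch (assuming the 1-based convention described here) prints these ranges for n = 8:

# Covered range of each FT[i] for n = 8, derived from the least significant set bit.
n = 8
for i in range(1, n + 1):
    lsb = i & -i
    print(f"FT[{i}] covers A[{i - lsb + 1}..{i}]")
# FT[1]: 1..1   FT[2]: 1..2   FT[3]: 3..3   FT[4]: 1..4
# FT[5]: 5..5   FT[6]: 5..6   FT[7]: 7..7   FT[8]: 1..8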
The array's 1-based indexing aligns directly with binary operations, such as bit manipulations for range determination, avoiding off-by-one errors common in zero-based systems and ensuring seamless integration with the tree's indexing logic.[12]
In terms of memory efficiency, this representation achieves O(n) space complexity, matching the original array's footprint, as it relies solely on the array without explicit pointers, nodes, or auxiliary structures for linking.[17][12]
Initialization proceeds by allocating the array of size n+1 and setting all entries to zero, after which the tree is populated incrementally through successive updates for each original array element, rather than via a bulk preprocessing of all prefix sums. This zero-initialized state supports the dynamic updates central to the structure's purpose.[12]
Tree-Like Interpretation
The Fenwick tree can be conceptualized as a hierarchical structure of binary trees that, for general array sizes, forms a forest covering the entire array through nested subtrees aligned to powers of two. Each index i (using 1-based indexing) acts as the root of a binary subtree responsible for a contiguous block of 2^k elements, with k denoting the number of trailing zeros in the binary representation of i. This arrangement ensures complete coverage of the array via these nested structures.[12]
The coverage for the node at index i spans the range from i - (i \& -i) + 1 to i, where i \& -i computes the value of the least significant set bit (LSB) of i, equivalent to 2^k. Subranges under this root—its children—are defined by adding the LSB value to the starts of smaller power-of-two blocks within the parent's range, forming binary branches that aggregate sums hierarchically. This setup underpins the data structure's design for dynamic prefix sums, as originally proposed by Fenwick.[19][12]
To visualize this, consider an array of size n=8. Index 4 (binary 100, with k=2, covering 2^2=4 elements) roots a subtree over positions 1 to 4. Its subtrees include index 2 (binary 10, k=1, covering 1 to 2) and index 3 (binary 11, k=0, covering 3 to 3), with index 1 (binary 1, k=0, covering 1 to 1) as a leaf under 2; updates at a leaf propagate upward to parents such as 2 and then 4. Index 8 (binary 1000, with k=3, covering 2^3=8 elements) roots the overall tree over positions 1 to 8, incorporating the subtree at 4 along with additional substructures at 6 (covering 5 to 6) and 7 (covering 7 to 7), with index 5 under 6. Queries accumulate sums from the relevant nodes in this hierarchy.[12]
This hierarchical interpretation offers an intuitive lens for grasping the Fenwick tree's mechanics, highlighting how the nested ranges ensure no overlaps or gaps in array coverage, which enables efficient logarithmic navigation for operations without requiring explicit pointers in the underlying array storage.[12]
Core Operations
Initialization
The initialization of a Fenwick tree begins with allocating an array of size n+1 (where n is the size of the input array A), initialized to zero, to store the tree structure; the extra element at index 0 is left unused to facilitate 1-based indexing.[20] For an input array A with elements indexed from 1 to n, the tree is built incrementally by iterating over each index i from 1 to n and invoking the update operation to add A[i] at position i, which propagates the value to the responsible cumulative sum positions in the tree.[12]
This incremental building process is the conventional approach, as it leverages the existing update mechanism to establish the prefix sums correctly without any additional construction logic.[21] A direct construction method achieves O(n) time by seeding each node with its own value and propagating each partial sum once to the next covering node, but the incremental method remains prevalent due to its simplicity, at the cost of O(n \log n) total initialization time.[22]
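A minimal sketch of such a linear-time construction in Python (assuming 1-based tree indices with FT[0] unused; the function name build_linear is this example's own):

# Sketch of an O(n) direct construction: seed each node with its own value,
# then push each partial sum once to the next node whose range covers it.
def build_linear(A):                    # A is a 0-based input list of length n
    n = len(A)
    FT = [0] * (n + 1)
    for i in range(1, n + 1):
        FT[i] += A[i - 1]
        parent = i + (i & -i)           # next index whose covered range contains i
        if parent <= n:
            FT[parent] += FT[i]
    return FT

print(build_linear([3, 1, 4, 2]))       # [0, 3, 4, 4, 10]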
Edge cases include handling an empty input array (n=0), where the Fenwick tree is typically represented as an array of size 1 initialized to zero, preventing any operations from accessing invalid indices.[12] Implementations must also account for indexing conventions: if the input array uses 0-based indexing, the build loop adjusts by starting from index 1 in the tree and mapping A[0] to tree position 1, ensuring compatibility across languages and libraries.
Update Operation
The update operation in a Fenwick tree modifies the value at a specific index by adding a delta to it and propagating the change to maintain accurate prefix sums across affected ranges. This process, originally described by Fenwick, ensures efficient adjustments to cumulative frequencies in dynamic scenarios, such as arithmetic coding applications.[1]
The steps to add a delta to index i (with 1 \leq i \leq n) are as follows: initialize the current position to i; add delta to the Fenwick tree array FT at the current position; compute the least significant bit (LSB) of the current position using the bitwise operation current & -current; add this LSB value to the current position; repeat the addition and position update until the current position exceeds n. This can be implemented in pseudocode as:
function update(i, delta):
    current = i
    while current <= n:
        FT[current] += delta
        lsb = current & -current
        current += lsb
Each iteration updates the node responsible for a power-of-two range ending at the current index and advances to the covering parent node for larger encompassing ranges, leveraging the binary structure to propagate the change upward.[1]
The correctness of this operation follows from the coverage structure of the nodes: a node at position j is responsible for the range [j - (j \& -j) + 1, j], and repeatedly adding the LSB moves from one covering node to the next larger one, so the update visits exactly those positions whose range contains i. Every stored partial sum that includes A[i] is therefore incremented by delta, while all other nodes remain unchanged, and every prefix sum that spans index i reflects the modification.[1]
The operation accommodates negative deltas for subtractions and requires i to be between 1 and n inclusive to stay within the array bounds.[1]
Prefix Sum Query
The prefix sum query in a Fenwick tree computes the cumulative sum of array elements from index 1 to a given index i, leveraging the tree's structure to achieve this in O(\log n) time. This operation is central to the data structure's efficiency for dynamic prefix computations, as introduced by Fenwick for maintaining cumulative frequencies in arithmetic coding applications.
The algorithm begins by initializing a sum variable to 0. It then iteratively adds the value stored at the current index in the Fenwick tree array (denoted FT) to the sum, while updating the index by subtracting its lowest set bit, represented as i \& -i in binary terms. This process continues until the index reaches 0. The pseudocode for this operation is as follows:
function prefix_sum_query(i):
    sum = 0
    while i > 0:
        sum += FT[i]
        i -= (i & -i)
    return sum
This mechanism works by decomposing the range [1, i] into disjoint subranges, each corresponding to a power-of-two length that is exactly covered by the value at a particular FT index. For instance, the lowest set bit of i identifies the largest such subrange ending at i, and subtracting it shifts to the next uncovered portion, ensuring no overlaps or gaps in coverage.[3]
The correctness of this approach stems from the binary representation of indices in the Fenwick tree, where each FT[j] stores the sum of a contiguous range of original array elements determined by the least significant 1-bit in j's binary form. The iterative subtraction effectively traverses the binary digits of i from least to most significant, summing exactly the contributions that partition [1, i] into these disjoint ranges, thus reconstructing the total prefix sum without redundancy or omission. Mathematically, the query can be expressed as:
\text{Query}(i) = \sum_{j \in P(i)} \text{FT}[j], \quad \text{where } P(i) \text{ is the sequence of indices obtained by successively subtracting the lowest set bit from } i \text{ until it reaches } 0.
[3]
To obtain a range sum from l to r, two prefix queries are combined by subtraction: sum(l to r) = Query(r) - Query(l-1), providing an efficient way to handle arbitrary intervals once prefix sums are available.
Implementation Details
Pseudocode Examples
The core operations of the Fenwick tree can be implemented using simple, efficient algorithms that leverage bitwise operations for navigating the tree structure. The following language-agnostic pseudocode outlines the initialization, update, and prefix sum query functions, as derived from the original algorithmic description by Fenwick. These implementations assume 1-based indexing for the array and Fenwick tree storage, with the tree represented as an array FT of size n+1 where FT[0] is unused.
Initialization
To build the Fenwick tree from an initial array A of size n, create an array FT initialized to zeros and sequentially update each position with the corresponding value from A. This ensures the tree correctly accumulates the prefix sums.
function build(A, n):
    FT = array of size n+1 initialized to 0
    for i = 1 to n:
        update(FT, n, i, A[i])
    return FT
Update Operation
The update function adds a delta value to the element at index i (1-based) and propagates the change to the affected tree nodes by repeatedly adding the delta to positions determined by the lowest set bit in the index.
function update(FT, n, i, delta):
    while i <= n:
        FT[i] += delta
        i += i & -i
Prefix Sum Query
The query function computes the prefix sum from index 1 to i (inclusive) by accumulating values from the tree nodes responsible for the range, subtracting the index's lowest set bit to traverse towards the root.
function query(FT, i):
    sum = 0
    while i > 0:
        sum += FT[i]
        i -= i & -i
    return sum
These pseudocode snippets utilize bitwise operations such as i & -i to isolate the least significant set bit (lowbit), enabling O(log n) time complexity for both updates and queries in languages supporting integer bitwise arithmetic. The 1-based indexing aligns with the tree's binary representation, where index 0 is reserved to simplify boundary conditions.
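For concreteness, the pseudocode above can be rendered as a short Python sketch; this is an illustrative translation rather than a canonical implementation, and names such as FenwickTree, prefix_sum, and range_sum are chosen here for readability:

# Illustrative Python translation of the build, update, and query pseudocode.
class FenwickTree:
    """1-based Fenwick tree over n elements; index 0 of the array is unused."""
    def __init__(self, n):
        self.n = n
        self.ft = [0] * (n + 1)

    def update(self, i, delta):
        # Add delta at position i and propagate to all covering nodes.
        while i <= self.n:
            self.ft[i] += delta
            i += i & -i

    def prefix_sum(self, i):
        # Sum of elements 1..i, accumulated over disjoint power-of-two ranges.
        s = 0
        while i > 0:
            s += self.ft[i]
            i -= i & -i
        return s

    def range_sum(self, l, r):
        # Sum of elements l..r via two prefix queries.
        return self.prefix_sum(r) - self.prefix_sum(l - 1) if l <= r else 0

# Example mirroring the walkthrough below: A = [3, 1, 4, 2]
t = FenwickTree(8)
for idx, val in enumerate([3, 1, 4, 2], start=1):
    t.update(idx, val)
print(t.prefix_sum(3))                  # 8
print(t.range_sum(2, 4))                # 7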
Walkthrough Example
Consider an example Fenwick tree for an array A of size n=8 with 1-based indexing, where A = [3, 1, 4, 2, 0, 0, 0, 0] (padding with zeros beyond the initial values for illustration). The Fenwick tree array FT is initialized to zeros with length n+1. The tree is constructed by applying the update operation sequentially for each non-zero element in A, following the standard update procedure where, for an index i and value v, FT[i] += v and then i is incremented by its least significant bit (LSB, computed as i \& -i) until i > n [23].
Begin with FT = [0, 0, 0, 0, 0, 0, 0, 0, 0].
- Update index 1 with value 3: Start at i=1, FT[1] += 3 so FT[1] = 3; LSB of 1 is 1, i = 2, FT[2] += 3 so FT[2] = 3; LSB of 2 is 2, i = 4, FT[4] += 3 so FT[4] = 3; LSB of 4 is 4, i = 8, FT[8] += 3 so FT[8] = 3; next i = 16 > 8, stop. Now FT = [0, 3, 3, 0, 3, 0, 0, 0, 3].
- Update index 2 with value 1: Start at i=2, FT[2] += 1 so FT[2] = 4; i = 4, FT[4] += 1 so FT[4] = 4; i = 8, FT[8] += 1 so FT[8] = 4; next i = 16 > 8, stop. Now FT = [0, 3, 4, 0, 4, 0, 0, 0, 4].
- Update index 3 with value 4: Start at i=3, FT[3] += 4 so FT[3] = 4; LSB of 3 is 1, i = 4, FT[4] += 4 so FT[4] = 8; LSB of 4 is 4, i = 8, FT[8] += 4 so FT[8] = 8; next i = 16 > 8, stop. Now FT = [0, 3, 4, 4, 8, 0, 0, 0, 8].
- Update index 4 with value 2: Start at i=4, FT[4] += 2 so FT[4] = 10; i = 8, FT[8] += 2 so FT[8] = 10; next i = 16 > 8, stop. The final state is FT = [0, 3, 4, 4, 10, 0, 0, 0, 10], which encodes the prefix sums: query(1) = 3, query(2) = 4, query(3) = 8, query(4) = 10 [23].
To demonstrate an update after construction, add 5 to index 3 (changing A[3] from 4 to 9): Start at i=3, FT[3] += 5 so FT[3] = 9; i = 4, FT[4] += 5 so FT[4] = 15; i = 8, FT[8] += 5 so FT[8] = 15; next i = 16 > 8, stop. The path follows indices 3 → 4 → 8, propagating the change to the responsible partial sums. Now FT = [0, 3, 4, 9, 15, 0, 0, 0, 15] [23].
For a prefix sum query up to index 3 (before this additional update, when FT[3] = 4 and FT[2] = 4): Initialize sum = 0; at i=3, add FT[3] = 4, subtract the LSB of 3 (1) so i = 2; add FT[2] = 4, subtract the LSB of 2 (2) so i = 0; total sum = 8, matching A[1] + A[2] + A[3] = 3 + 1 + 4 [23]. After the update, the same query yields sum = 9 (from FT[3]) + 4 (from FT[2]) = 13, reflecting the modified prefix.
[23]: Fenwick, P. M. (1994). A New Data Structure for Cumulative Frequency Tables. Software: Practice and Experience, 24(3), 327–336. doi:10.1002/spe.4380240306
Advanced Features
Range Sum Queries
The range sum from index l to r (inclusive) in a Fenwick tree is derived by subtracting the prefix sum up to l-1 from the prefix sum up to r, expressed as \text{RangeSum}(l, r) = \text{PrefixSum}(r) - \text{PrefixSum}(l-1).[12] This operation maintains O(\log n) time complexity, as it involves two prefix sum computations. When l = 1, the query simplifies directly to \text{PrefixSum}(r), avoiding the subtraction.[12]
Fenwick trees employ 1-based indexing to handle edge cases robustly, preventing negative index issues; for instance, queries with l > r conventionally return 0. The structure assumes valid inputs where 1 \leq l \leq r \leq n, with adjustments for 0-based arrays by incrementing indices.[12]
When many range queries are required, each is processed independently in O(\log n) time, so the total cost scales linearly with the number of queries.[12] The basic Fenwick tree does not support direct range updates; instead, a difference-array formulation can be employed, applying point updates at the start of the range and just past its end with opposite signs.[12]
A key limitation of the standard Fenwick tree is its design for point updates paired with range queries; to achieve range updates with point queries, the approach inverts this by storing differences in the tree and querying prefixes to reconstruct point values.[12]
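A brief sketch of this inverted scheme in Python (illustrative; the class RangeUpdateBIT and its method names are assumptions of this example):

# Sketch: range updates with point queries by storing differences in the tree.
# add_to_range(l, r, delta) adds delta to every element in [l, r];
# point_value(i) reconstructs A[i] as the prefix sum of the stored differences.
class RangeUpdateBIT:
    def __init__(self, n):
        self.n = n
        self.ft = [0] * (n + 1)

    def _add(self, i, delta):
        while i <= self.n:
            self.ft[i] += delta
            i += i & -i

    def add_to_range(self, l, r, delta):
        self._add(l, delta)             # start adding at l
        self._add(r + 1, -delta)        # cancel just past r

    def point_value(self, i):
        s = 0
        while i > 0:
            s += self.ft[i]
            i -= i & -i
        return s

b = RangeUpdateBIT(8)
b.add_to_range(2, 5, 3)
print(b.point_value(1), b.point_value(4), b.point_value(6))   # 0 3 0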
Variants and Extensions
One prominent extension of the Fenwick tree is the multi-dimensional variant, particularly the two-dimensional Fenwick tree, which facilitates efficient updates and prefix sum queries on matrices. In this structure, the tree is conceptualized as an array of one-dimensional Fenwick trees, where updates and queries incorporate bitwise operations on both row and column indices, such as iterating with i += i & -i and j += j & -j to traverse the responsible regions in O(log n * log m) time.[12]
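A minimal two-dimensional sketch in Python, assuming an n x m grid with 1-based indices (the class name Fenwick2D is this example's own):

# Sketch of a 2D Fenwick tree: point update and prefix-rectangle sum
# in O(log n * log m), iterating the lowbit step on both coordinates.
class Fenwick2D:
    def __init__(self, n, m):
        self.n, self.m = n, m
        self.ft = [[0] * (m + 1) for _ in range(n + 1)]

    def update(self, r, c, delta):
        i = r
        while i <= self.n:
            j = c
            while j <= self.m:
                self.ft[i][j] += delta
                j += j & -j
            i += i & -i

    def prefix_sum(self, r, c):
        # Sum over the rectangle [1..r] x [1..c].
        s, i = 0, r
        while i > 0:
            j = c
            while j > 0:
                s += self.ft[i][j]
                j -= j & -j
            i -= i & -i
        return s

g = Fenwick2D(4, 4)
g.update(2, 3, 5)
print(g.prefix_sum(4, 4), g.prefix_sum(1, 4))   # 5 0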
The Fenwick tree can be adapted for non-sum operations by replacing the addition operator with other associative operations that have inverses. For bitwise XOR, the tree stores XOR prefixes, enabling range XOR queries by computing the XOR of two prefix results, as XOR is its own inverse, maintaining O(log n) efficiency for updates and queries.[24] For finding range minima, the structure can be extended using two Fenwick trees—one storing minima of subarrays ending at each index and another for subarrays starting at each index—allowing range minimum queries by combining minima from relevant subarrays in O(log n) time.[25] Similar adaptations for maxima are possible by propagating maximum values analogously.[12]
Another extension supports both range updates and range queries by employing two Fenwick trees: one stores the differential point updates and the other stores correction terms for their prefix sums, allowing range additions and range sums to be computed from a combination of prefix queries over both trees in O(log n) time per operation, as sketched below.[12] For order statistics, such as finding the k-th smallest element or ranking, a Fenwick tree maintains frequency counts over coordinate-compressed values, enabling binary search on prefix sums for selection and rank operations in O(log^2 n) time.
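One common formulation of the two-tree approach can be sketched in Python as follows (illustrative; it relies on the standard identity prefix(i) = query(B1, i) * i - query(B2, i), and the class name is this example's own):

# Sketch: range updates and range sum queries with two Fenwick arrays (B1, B2).
class RangeRangeBIT:
    def __init__(self, n):
        self.n = n
        self.b1 = [0] * (n + 1)
        self.b2 = [0] * (n + 1)

    def _add(self, tree, i, delta):
        while i <= self.n:
            tree[i] += delta
            i += i & -i

    def _query(self, tree, i):
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    def range_add(self, l, r, x):
        # Add x to every element in [l, r] via differential updates.
        self._add(self.b1, l, x)
        self._add(self.b1, r + 1, -x)
        self._add(self.b2, l, x * (l - 1))
        self._add(self.b2, r + 1, -x * r)

    def prefix_sum(self, i):
        return self._query(self.b1, i) * i - self._query(self.b2, i)

    def range_sum(self, l, r):
        return self.prefix_sum(r) - self.prefix_sum(l - 1)

t = RangeRangeBIT(8)
t.range_add(2, 5, 3)                    # add 3 to elements 2..5
print(t.range_sum(1, 8), t.range_sum(3, 4))   # 12 6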
Recent extensions since 2000 address limitations in data types and sparsity. Floating-point support is achieved through discretization, where non-integer values are mapped to consecutive integers via sorting and ranking, preserving the tree's integer-based indexing while handling real-valued data efficiently. For sparse data, compact Fenwick trees reduce space usage by exploiting known upper bounds on elements and optimizing for dynamic ranking and selection, incurring only a few percent overhead while supporting updates, ranks, and selections in O(log n) time, as demonstrated in structures outperforming prior bit vectors.[26]
Time and Space Complexity
The Fenwick tree achieves efficient performance for dynamic prefix sum queries and updates on an array of size n. The update operation, which modifies an element and propagates the change to affected positions, requires O(\log n) time. This bound arises because the process involves at most \log_2 n steps, each corresponding to a bit in the binary representation of the index, where the least significant set bit determines the range to update.[27]
Similarly, the prefix sum query operation sums disjoint ranges covering the prefix up to a given index in O(\log n) time. It traverses a comparable number of positions by repeatedly adding the value at the current index and clearing the least significant set bit to move to the next relevant range.[27]
The space complexity of a Fenwick tree is O(n), as it is implemented using a single array of size n+1 (with 1-based indexing) to store cumulative sums for the ranges. Constructing the tree from an initial array takes O(n \log n) time, achieved by performing an update for each of the n elements.[28]
A proof sketch for the O(\log n) time bounds relies on the binary decomposition of indices. Each update or query operation visits a number of nodes equal to the number of 1-bits (Hamming weight) in the binary representation of the index, which is at most \lfloor \log_2 n \rfloor + 1 in the worst case and averages approximately \log_2 n / 2 accesses per operation. Since bit operations like isolating the least significant set bit take constant time, the total time is O(\log n). This decomposition ensures each step halves the effective range, traversing the logical height of the implicit binary tree structure.[18]
The Fenwick tree maintains these worst-case O(\log n) bounds for all operations regardless of the input size n or sequence of updates, as the structure remains balanced by design and does not degrade like unbalanced binary search trees.[28]
Comparison with Alternatives
The Fenwick tree offers a significant improvement over the naive prefix sum array for dynamic arrays requiring both updates and queries. While a prefix sum array allows O(1) prefix queries after O(n) preprocessing, any update requires O(n) time to rebuild the entire array by adjusting all subsequent prefix sums.[29] In contrast, the Fenwick tree achieves O(log n) time for both updates and prefix queries, making it far more efficient when updates are frequent, though it trades off the constant-time query speed of static prefix sums.[29]
Compared to segment trees, Fenwick trees provide similar asymptotic performance for point updates and prefix sum queries, both operating in O(log n) time, while also supporting range queries in O(log n) via prefix sum differences.[30] However, Fenwick trees are more space-efficient, requiring only a single array of n+1 entries versus the roughly 4n entries typically allocated for segment trees due to their recursive node structure.[30] Fenwick trees also feature simpler implementation with fewer lines of code and no need for lazy propagation in basic point update scenarios, as they inherently handle the propagation through binary indexing.[12]
Fenwick trees generally outperform square root decomposition for large n in scenarios requiring frequent updates and queries, delivering O(log n) time for both compared to O(1) for point updates and O(√n) for queries in square root decomposition, which divides the array into blocks of size approximately √n.[31] This logarithmic efficiency makes Fenwick trees preferable for high-volume operations on sizable datasets, since log n grows much slower than √n for large n.[32] Square root decomposition, however, can be simpler to implement for certain range update tasks, achieving O(1) for point updates and O(√n) for range updates by adjusting block summaries, whereas standard Fenwick trees require modifications like difference arrays to handle range updates efficiently.[31]
Overall, Fenwick trees strike a balance for point update and prefix/range query workloads with their efficiency and simplicity, but they are less flexible for direct range updates compared to segment trees, which support such operations natively with lazy propagation in O(log n) time.[30] As of 2025, segment trees remain the go-to for versatile range operations, while Fenwick trees excel in memory-constrained or prefix-focused applications without needing advanced features.[33]
Applications
Algorithmic Uses
Fenwick trees are commonly employed in algorithms for counting inversions in an array, where an inversion is defined as a pair of indices (i, j) such that i < j and A[i] > A[j]. To compute this efficiently, the array elements are first discretized to ranks to handle large values, and then processed from right to left (or left to right). For each element, a prefix sum query on the Fenwick tree counts the number of previously inserted elements smaller (or larger) than the current one, followed by an update to insert the current element's frequency. This approach achieves O(n log n) time complexity, improving upon the O(n²) naive method.
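A hedged Python sketch of this inversion-counting approach, using coordinate compression and right-to-left processing (function names are this example's own):

# Sketch: counting inversions with a Fenwick tree over value ranks.
def count_inversions(A):
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(A)))}   # coordinate compression
    n = len(ranks)
    ft = [0] * (n + 1)

    def add(i):                        # record one occurrence of rank i
        while i <= n:
            ft[i] += 1
            i += i & -i

    def query(i):                      # how many recorded values have rank <= i
        s = 0
        while i > 0:
            s += ft[i]
            i -= i & -i
        return s

    inv = 0
    for x in reversed(A):
        inv += query(ranks[x] - 1)     # elements to the right that are smaller
        add(ranks[x])
    return inv

print(count_inversions([3, 1, 4, 2])) # 3, from the pairs (3,1), (3,2), (4,2)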
Fenwick trees facilitate range queries for maintaining order statistics, such as finding the k-th smallest element in a dynamic set through binary search on prefix sums, enabling efficient order statistic operations in O(log n) time per query or update.[12]
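A minimal sketch of order statistics over a dynamic multiset, using binary lifting on the tree to find the k-th smallest in O(log n) (illustrative; the class OrderStatisticBIT and its method names are this example's own):

# Sketch: order statistics on a dynamic multiset of values in 1..n.
# Counts are kept in a Fenwick tree; kth() descends by binary lifting.
class OrderStatisticBIT:
    def __init__(self, n):
        self.n = n
        self.ft = [0] * (n + 1)
        self.log = n.bit_length()

    def insert(self, v, cnt=1):        # cnt may be negative to delete occurrences
        while v <= self.n:
            self.ft[v] += cnt
            v += v & -v

    def kth(self, k):
        # Find the largest position pos with prefix count < k; the answer is pos + 1.
        pos = 0
        for p in range(self.log, -1, -1):
            nxt = pos + (1 << p)
            if nxt <= self.n and self.ft[nxt] < k:
                pos = nxt
                k -= self.ft[nxt]
        return pos + 1

s = OrderStatisticBIT(10)
for v in [5, 2, 7, 2]:
    s.insert(v)
print(s.kth(1), s.kth(3), s.kth(4))    # 2 5 7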
Fenwick trees play a key role in dynamic programming problems, particularly those involving partial sums or maximums over coordinates, such as computing the longest increasing subsequence (LIS). In the O(n log n) LIS algorithm, a Fenwick tree adapted for prefix maximum queries stores, for each coordinate-compressed value, the length of the longest increasing subsequence ending with that value; for each element, a query retrieves the maximum length over all strictly smaller values, and an update records the new length, as sketched below. This method extends to variants like rollercoaster sequences, where multiple Fenwick trees maintain maxima for different run types (increasing or decreasing), allowing computation of the longest such subsequence in O(n log n) time.[34]
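A short illustrative sketch of this BIT-based LIS computation in Python, using a Fenwick tree specialized to prefix maxima (the function name lis_length is this example's own):

# Sketch: longest increasing subsequence in O(n log n) using a Fenwick tree
# adapted for prefix maxima over coordinate-compressed values.
def lis_length(A):
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(A)))}
    n = len(ranks)
    ft = [0] * (n + 1)                  # ft[i] holds a maximum over a prefix range

    def prefix_max(i):                  # best LIS length among values with rank <= i
        best = 0
        while i > 0:
            best = max(best, ft[i])
            i -= i & -i
        return best

    def update_max(i, val):             # record an LIS of length val ending at rank i
        # only monotone (non-decreasing) updates are supported this way
        while i <= n:
            ft[i] = max(ft[i], val)
            i += i & -i

    ans = 0
    for x in A:
        length = prefix_max(ranks[x] - 1) + 1
        update_max(ranks[x], length)
        ans = max(ans, length)
    return ans

print(lis_length([3, 1, 4, 2, 5]))      # 3, e.g. the subsequence 1, 2, 5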
In competitive programming, Fenwick trees are a staple for solving problems that demand efficient handling of dynamic frequencies or cumulative sums, such as determining the number of elements less than or equal to a given value x after a series of insertions and deletions. By sorting indices and using prefix sum queries on discretized values, contestants can resolve queries like range counts in online judges, often as part of broader solutions involving coordinate compression. This utility stems from the structure's origins in maintaining cumulative frequency tables for efficient access.[12]
Fenwick trees are also used in parallel clustering algorithms, such as density peaks clustering, where they enable efficient computation of distances and neighbor counts in parallel settings, achieving faster exact clustering.[4]
Real-World Implementations
Fenwick Trees find practical deployment in machine learning frameworks, where they facilitate efficient prefix sum computations essential for advanced attention mechanisms. In the Log-Linear Attention model, a Fenwick Tree-based partitioning scheme hierarchically divides input sequences into power-of-two segments, allowing queries to access a logarithmic number of hidden states that summarize past context at multiple scales. This approach enhances the expressiveness of linear attention while keeping computational costs at O(T log T) during training and memory at O(log T) during decoding, outperforming standard linear attention on tasks like multi-query associative recall (92.9% accuracy vs. 89.6%) and language modeling (21.44 perplexity vs. 21.73).[35]
In game development, Fenwick Trees support real-time scoring systems, such as maintaining dynamic leaderboards in multiplayer environments with frequent score updates and range queries for ranking players. For instance, game servers can use them to update individual player scores and query cumulative totals efficiently, ensuring low-latency responses in competitive online games. This application leverages the structure's ability to handle concurrent updates without race conditions in real-time processing.[36]
In the financial sector, Fenwick Trees are adopted in trading software for computing cumulative position sums and running totals over transaction histories, providing efficient handling of high-frequency data streams. This supports real-time risk assessment and portfolio analysis by enabling logarithmic-time updates and queries on voluminous market data.[36]