
Dynamic array

A dynamic array is a resizable, array-based data structure that automatically adjusts its capacity to accommodate a varying number of elements, supporting efficient random access along with dynamic growth or shrinkage. Unlike fixed-size arrays, it maintains a distinction between its size (the current number of elements) and its capacity (the allocated space in the underlying array), allowing insertions and deletions without manual resizing by the user. Dynamic arrays achieve efficiency through a resizing strategy: when the array fills to capacity during an insertion (typically at the end), it allocates a new underlying array with increased capacity, often doubled, copies the existing elements over, and then deallocates the old array. This process, while occasionally costly at O(n) time for the copy operation where n is the current size, yields an amortized cost of O(1) per insertion, because the resizing cost is spread across many cheap operations. Shrinking may occur when the size falls below a threshold (e.g., half the capacity) to reclaim memory, though this is less common and often implemented with a factor such as halving the capacity. Access and modification by index remain O(1) (search is O(n)), making dynamic arrays suitable for scenarios requiring frequent appends and random access, such as implementing stacks, queues, or lists.

Implementations of dynamic arrays are standard in modern programming languages to abstract away manual memory management. In C++, the std::vector class from the Standard Template Library provides this functionality, supporting contiguous storage with automatic reallocation. Java's ArrayList class in the java.util package similarly uses a dynamic array internally, with generics for type safety and methods like add() and remove() handling resizing transparently. Python's built-in list type is also a dynamic array, optimized to over-allocate during growth so as to minimize reallocations, and it supports heterogeneous elements. These built-in structures leverage the dynamic array's strengths while mitigating drawbacks such as occasional high-cost resizes through amortized analysis.

Fundamentals

Definition and Characteristics

A dynamic array is a variable-size list that serves as a wrapper around a contiguous block of memory, facilitating efficient access to elements by index and automatic resizing to accommodate insertions or deletions. It overcomes the fixed-capacity limitation of static arrays by dynamically allocating and managing the underlying storage, allowing the collection to grow or shrink as needed during program execution. Key characteristics of dynamic arrays include the separation between logical size, the current number of valid elements, and physical capacity, the total allocated slots in the backing array, which may exceed the size to buffer future additions without immediate resizing. Access to any element by its index occurs in constant time, O(1), because the contiguous layout allows the element's memory address to be computed directly from the base address and the index. Additionally, dynamic arrays support automatic expansion when the size approaches capacity, ensuring seamless addition of elements beyond the initial allocation. The concept emerged in the late 1960s through early programming languages, with ALGOL 68 introducing dynamic arrays that could be sized at runtime using variable bounds, such as [m:n] declarations creating arrays of n-m+1 elements upon execution. Further evolution in the 1970s included dynamic list structures in other languages, though these often used non-contiguous implementations; the contiguous dynamic array model was formalized and widely adopted in standard libraries after the 1980s. Basic operations encompass appending elements to the end (efficiently extending the size), accessing or modifying elements by index, removing elements from the end, and prepending (which is less efficient because existing elements may need to be shifted). These operations maintain the structure's utility as a list-like interface for ordered collections.

Comparison to Static Arrays

Static arrays are allocated with a fixed size determined at compile time or at initial allocation, providing constant-time O(1) access to elements via indexing but necessitating manual reallocation and element copying if resizing is required. In contrast, dynamic arrays support runtime resizing without direct user intervention, automatically handling capacity increases through internal mechanisms, though this incurs occasional O(n) operations for copying elements during expansion. This fundamental difference highlights the trade-offs: static arrays offer simplicity and predictability with no overhead from unused memory, making them suitable for scenarios with known, fixed data sizes, such as representing rows in a matrix whose dimensions are predefined. Dynamic arrays, however, provide greater flexibility for handling variable-length data, like processing user inputs that may grow unpredictably, at the expense of potential space waste due to over-allocation and the complexity of resizing logic. While both structures share efficient indexed access, static arrays avoid the performance variability introduced by resizing in dynamic ones, favoring applications where size stability ensures optimal resource use.

Core Implementation

Size and Capacity Management

Dynamic arrays maintain two key metrics to track their state: the logical size, which denotes the current number of elements actively stored, and the physical capacity, which indicates the total number of slots available in the underlying fixed-size array. The logical size begins at 0 for an empty dynamic array, representing no elements present, while the physical capacity starts as a small positive value, often a small power of two such as 4 or 16, to accommodate initial insertions without immediate reallocation. Upon initialization, a dynamic array allocates storage for its backing array based on the initial capacity, which is typically greater than zero to defer the first resizing operation and improve efficiency for small datasets; this allocation is managed through a pointer to the array or an equivalent indexing mechanism in the language's memory model. For instance, in C++'s std::vector, the initial capacity may be 0 in some implementations, but it grows as needed upon element addition. The structure often includes fields to store these values explicitly, such as a pointer to the data array, the current size, and the capacity, allowing constant-time queries for both. During maintenance, appending an element increments the logical size by one if the current size is below the capacity, placing the new element at the end of the used portion without altering the allocated space; if the size equals the capacity, resizing is triggered, though the details of this process are handled separately. Conversely, removing an element, such as via pop-back, simply decrements the logical size and adjusts the end of the used region accordingly, without reducing the physical capacity unless an explicit operation is invoked to reclaim unused memory. This asymmetric handling (expanding automatically but not contracting) helps balance performance and memory usage. A representative implementation in pseudocode might define the dynamic array as a structure with dedicated fields for these attributes:
struct DynamicArray {
    void*  data;      // Pointer to the backing array
    size_t size;      // Logical size: number of elements stored (initially 0)
    size_t capacity;  // Physical capacity: allocated slots (initially e.g., 16)
};
This design enables efficient tracking and access, with the data pointer referencing contiguous memory of capacity elements, of which only the first size are in use.
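To make this bookkeeping concrete, the following C sketch specializes the structure above to int elements; the function name, doubling factor, and initial capacity of 4 are illustrative choices rather than a fixed convention.
#include <stdlib.h>

typedef struct {
    int    *data;      // backing array of `capacity` slots
    size_t  size;      // logical size: elements currently in use
    size_t  capacity;  // physical capacity: allocated slots
} IntArray;            // the generic structure above, specialized to int

void push_back(IntArray *a, int value) {
    if (a->size == a->capacity) {                    // full: expand first
        size_t new_capacity = a->capacity ? a->capacity * 2 : 4;
        int *grown = realloc(a->data, new_capacity * sizeof *grown);
        if (grown == NULL) abort();                  // allocation failure (simplified)
        a->data = grown;
        a->capacity = new_capacity;
    }
    a->data[a->size++] = value;                      // common case: O(1) write and size bump
}
Starting from an empty array, appending five values leaves size equal to 5 while the capacity has grown to 8, illustrating the gap between logical size and allocated space described above.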

Resizing Mechanisms

Dynamic arrays implement resizing to accommodate varying numbers of elements by dynamically allocating and deallocating memory for the underlying array. The primary resizing mechanism is expansion, which is triggered when an insertion would exceed the current capacity. In this process, a new array of larger size is allocated, all existing elements are copied from the old array to the new one in O(n) time where n is the current size, the pointer to the storage is updated to the new array, and the old array is deallocated. This ensures contiguous storage is maintained while allowing growth beyond the initial limits. Contraction, though optional in many implementations to avoid unnecessary overhead, may be performed to reclaim unused memory when the array becomes underutilized. For instance, if the number of elements falls below a quarter of the capacity, the array can be shrunk by allocating a new smaller array, copying the elements over in O(n) time, updating the pointer, and freeing the old storage. This mechanism helps prevent excessive memory waste but is not always automatic, as seen in standard library implementations where explicit methods are provided for shrinking. During both expansion and contraction, element relocation preserves the order of elements by copying them sequentially from the source to the destination array. For efficiency with primitive or plain old data types, optimized memory copy functions like memcpy may be used; for complex types, a copy loop invokes copy constructors or assignment operators to ensure proper copies. Implementations handle generic types through templates, generics, or void pointers combined with element-size information to manage relocation without knowledge of the concrete type. Edge cases in resizing include initial allocation, where an empty dynamic array with zero capacity allocates its first block upon the initial insertion, often starting with a small default size. Zero-capacity states are handled by immediately triggering expansion on the first add operation. Memory allocation failures during resizing typically result in exceptions being thrown, such as std::bad_alloc in C++ or OutOfMemoryError in Java, or in error codes returned in lower-level languages to allow graceful failure handling. The following pseudocode illustrates a generic resize operation for a dynamic array structure:
void resize(DynamicArray* a, size_t new_capacity) {
    if (new_capacity == a->capacity) return;

    // element_size is the per-element byte count tracked by the structure
    void* new_data = allocate(new_capacity * a->element_size);
    if (new_data == NULL) {
        // Handle allocation failure, e.g., throw an exception or return an error
        return;
    }

    // Copy elements from old to new storage, preserving order
    for (size_t i = 0; i < a->size; ++i) {
        copy_element(&((char*)new_data)[i * a->element_size],
                     &((char*)a->data)[i * a->element_size]);
    }

    // Deallocate old data if applicable
    if (a->data != NULL) {
        deallocate(a->data);
    }

    a->data = new_data;
    a->capacity = new_capacity;
}
This function can be invoked during expansion or contraction, with the new_capacity determined by the specific trigger. The frequency of such resizes is mitigated by strategies that allocate extra capacity in advance.
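Contraction can reuse the same routine. A minimal sketch in the same pseudocode style, assuming the quarter-full threshold and halving factor mentioned above (both are common but not universal choices):
void pop_back(DynamicArray* a) {
    if (a->size == 0) return;
    a->size--;                                // logical removal only
    // Optional contraction: once the array is less than a quarter full,
    // halve the capacity using the resize() routine above. The gap between
    // the shrink threshold (1/4) and the shrink factor (1/2) prevents
    // alternating grow/shrink thrashing near a boundary.
    if (a->capacity > 1 && a->size < a->capacity / 4) {
        resize(a, a->capacity / 2);
    }
}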

Amortized Cost Analysis

Amortized analysis evaluates the cost of operations by considering the average cost over a worst-case sequence of operations, rather than the cost of individual operations in isolation. This approach is particularly useful for dynamic arrays, where occasional expensive resizes can be offset by many inexpensive operations. Common techniques include the aggregate method, which sums the costs over a sequence and divides by the number of operations, and the accounting method, which assigns credits to cheap operations to pay for future expensive ones. The aggregate method demonstrates that for a sequence of n insertions into an initially empty dynamic array using geometric growth (such as doubling the capacity upon overflow), the total cost is O(n), yielding an amortized cost of O(1) per insertion. Each insertion typically costs O(1) time for adding an element, but resizes incur copying costs: when the array grows from capacity 2^k to 2^{k+1}, 2^k elements are copied. Over n insertions, resizes occur at capacities 1, 2, 4, ..., up to the largest power of 2 less than or equal to n, and the total copying cost is the sum of a geometric series: \sum_{k=0}^{\lfloor \log_2 n \rfloor} 2^k = 2^{\lfloor \log_2 n \rfloor + 1} - 1 < 2n. Adding the O(n) cost of the individual insertions gives a total of at most 3n operations, so the average per insertion is O(1). The potential method provides a more flexible framework by defining a potential function \Phi that measures the "stored work" in the data structure, ensuring that amortized costs remain bounded even for varied operation sequences. For a dynamic array, a suitable potential is \Phi = 2 \times \text{size} - \text{capacity}, where \text{size} is the current number of elements and \text{capacity} is the allocated size (initially \Phi = 0). For an append operation without a resize, the actual cost is \hat{c}_i = 1 (adding the element) and \Delta \Phi = +2 (size increases by 1, capacity unchanged), yielding an amortized cost \tilde{c}_i = \hat{c}_i + \Delta \Phi = 3. During a resize from capacity n to 2n (when size = n), the actual cost is n + O(1) for copying and adding, but \Delta \Phi \approx 2 - n (after the append, size = n+1 and the new capacity is 2n), resulting in an amortized cost \tilde{c}_i \approx 3. Thus, over any sequence, the amortized cost per append is at most 3, i.e., O(1). This analysis assumes operations are limited to appends at the end of the array and focuses on resizing costs, ignoring constant-time factors such as memory allocation overhead or deallocation during contractions.
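For a concrete instance of the aggregate bound (an illustrative calculation, assuming doubling from an initial capacity of 1): appending n = 100 elements triggers resizes when the size reaches 1, 2, 4, \ldots, 64, so the total copying cost is \sum_{k=0}^{6} 2^k = 2^7 - 1 = 127 < 2 \times 100, and the total work is at most 100 + 127 = 227 < 3 \times 100 element operations, i.e., fewer than 3 per append on average.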

Growth Strategies

Expansion Techniques

Expansion techniques in dynamic arrays determine how the underlying storage capacity is increased when the current allocation is exhausted, directly impacting overall efficiency. One basic method is incremental expansion, where the capacity is augmented by a fixed constant amount upon each resize, such as adding 10 slots regardless of the current size. This approach, while simple, results in quadratic total time complexity, O(n²), for inserting n elements in the worst case, because the cumulative cost of repeated copies grows with the square of the array size. A more efficient strategy is geometric expansion, which scales the capacity multiplicatively by a constant factor φ > 1, commonly φ = 2 (doubling). When the array fills, a new allocation is made at φ times the current capacity, and elements are copied over. This method bounds the total resizing cost to O(n) for n insertions, since the copy counts across successive expansions form a geometric series bounded by a constant multiple of n (on average, each element is copied only a constant number of times). Geometric expansion introduces some wasted space due to over-allocation but enables amortized O(1) insertion time, unlike incremental methods that fail to amortize effectively. To optimize for both small and large arrays, hybrid approaches start with incremental or fixed over-allocations for initial growth and transition to geometric scaling beyond a certain threshold, reducing overhead in early stages while maintaining efficiency later. Python's implementation exemplifies this: for small lists, it uses modest predefined over-allocations (e.g., jumping to capacities of 4, 8, 16), then applies a multiplicative factor of roughly 1.125 for larger resizes via the formula in list_resize, (new_size + (new_size >> 3) + 6) & ~3, ensuring minimal reallocations. Pseudocode for a basic geometric expansion might appear as follows, typically invoked before an insertion that would exceed capacity:
if (size == capacity) {
    new_capacity = max(initial_capacity, capacity * growth_factor);
    // Allocate new array of size new_capacity
    // Copy size elements from old to new array
    // Deallocate old array and update pointers
    capacity = new_capacity;
}
This technique underpins the amortized benefits of dynamic arrays, where individual operations remain efficient on average.
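The quadratic-versus-linear distinction can be checked directly. The following small C program is an illustrative simulation that counts only the element copies performed at resizes; the +10 increment and the factor of 2 are the example values used above.
#include <stdio.h>

// Total elements copied across all resizes when appending n elements,
// growing the capacity by a fixed step each time: Theta(n^2 / step).
static long long copies_incremental(long long n, long long step) {
    long long cap = 0, copies = 0;
    for (long long size = 0; size < n; size++) {
        if (size == cap) { copies += size; cap += step; }
    }
    return copies;
}

// Same count with geometric (multiplicative) growth: O(n) in total.
static long long copies_geometric(long long n, long long factor) {
    long long cap = 0, copies = 0;
    for (long long size = 0; size < n; size++) {
        if (size == cap) { copies += size; cap = cap ? cap * factor : 1; }
    }
    return copies;
}

int main(void) {
    long long n = 100000;
    printf("incremental (+10): %lld copies\n", copies_incremental(n, 10));
    printf("geometric   (x2) : %lld copies\n", copies_geometric(n, 2));
    return 0;
}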

Growth Factor Selection

The selection of the growth factor in dynamic arrays is guided by the need to achieve amortized constant-time operations while minimizing memory overhead, with the factor φ required to be greater than 1 to ensure the total resizing costs remain linear in the number of elements. Practical implementations commonly employ factors in the range of 1.5 to 2, as seen in Java's ArrayList (1.5) and, commonly, 2 in many implementations of C++'s std::vector, such as GCC's libstdc++. Larger growth factors, such as 2, reduce the frequency of resizing operations, lowering the amortized insertion cost to approximately 3 units per element (1 for the insertion plus an amortized 2 copies), but result in greater temporary space waste, reaching a maximum of (\phi - 1) \times \text{size} immediately after an expansion. Conversely, smaller factors like 1.5 increase the resize frequency and raise the amortized cost to approximately 4 units, but limit peak waste to approximately 33% of the allocation (the fraction \frac{\phi - 1}{\phi}), offering better space efficiency under variable load conditions. The maximum unused space following a resize can thus be expressed as less than (\phi - 1) \times \text{size}, highlighting the direct relationship between the factor and potential over-allocation. Empirical selection of growth factors often accounts for hardware and system constraints, such as cache line sizes and memory allocation overhead, to optimize performance in real-world scenarios; for instance, factors are chosen to avoid misalignment that could degrade efficiency or trigger excessive allocator calls. In low-level languages such as C and C++, considerations include mitigating memory fragmentation from frequent small allocations during resizes, while garbage-collected languages prioritize factors that reduce pressure on the collector by smoothing allocation patterns.
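As a back-of-the-envelope check of the figures above (an illustrative derivation, counting one unit per element write over a long run of appends), the copies performed across all resizes total less than n \cdot \frac{\phi}{\phi - 1}, so the amortized cost per append is roughly 1 + \frac{\phi}{\phi - 1}: for \phi = 2 this gives 1 + 2 = 3 units, with up to \frac{\phi - 1}{\phi} = 50\% of the allocation unused immediately after a resize, while \phi = 1.5 gives 1 + 3 = 4 units with peak unused space of about 33%.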

Performance Characteristics

Time Complexity

Access by index in a dynamic array is achieved in constant time, O(1), in both worst-case and average-case scenarios, owing to the contiguous allocation, which allows the element's address to be computed directly from the base address and the index. Appending an element to the end of a dynamic array has an amortized cost of O(1), as the occasional resizing operation, which copies all elements to a larger array, is spread across multiple appends; in the worst case, however, a single append can take O(n) time when resizing occurs. This amortized bound arises from the resizing mechanism analyzed earlier, ensuring efficient overall performance for sequences of appends. Insertion or deletion at an arbitrary position in the dynamic array requires shifting subsequent elements to maintain contiguity, resulting in O(n) time complexity in the worst and average cases, where n is the current number of elements; operations at the end can be O(1) if no resizing is triggered. Unordered search within a dynamic array typically involves a linear scan, yielding O(n) time complexity for both worst-case and average-case performance. Iterating over all elements in a dynamic array takes O(n) total time, as each element is accessed sequentially in constant time per element. These time complexities use big-O notation to denote upper bounds, with practical constants influenced by element copying during shifts or resizes, though the asymptotic behavior remains dominant for large n.
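The linear cost of insertion at an arbitrary position comes from shifting the tail of the array. A short C sketch using the IntArray structure from the size-and-capacity sketch earlier; it assumes room for one more element has already been ensured by the expansion logic, and the function name is illustrative.
#include <string.h>

// Insert `value` at position `index` (0 <= index <= a->size), shifting the
// elements at index..size-1 one slot to the right: O(n - index) moves,
// and O(1) when index == a->size (a plain append).
void insert_at(IntArray *a, size_t index, int value) {
    memmove(&a->data[index + 1], &a->data[index],
            (a->size - index) * sizeof a->data[0]);
    a->data[index] = value;
    a->size++;
}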

Space Complexity and Efficiency

Dynamic arrays have a base space complexity of O(n) to store n elements in the underlying array, with an additional O(1) overhead for bookkeeping fields such as the current size, the capacity, and a pointer to the data. Over-allocation during resizing leads to wasted space, as the capacity exceeds the size in order to support efficient future insertions; in the worst case, immediately after an expansion, the waste can reach up to (\phi - 1) \times n, where \phi is the growth factor. The average space usage across a sequence of insertions is approximately \frac{\phi}{\phi - 1} \times n, reflecting the cumulative effect of allocations. Resizing operations require auxiliary space of O(n), as a temporary array of the new capacity must be allocated and populated with elements from the old array before the latter is deallocated. Space efficiency is quantified by the load factor, defined as the ratio of size to capacity (ranging from 0 to 1); implementations typically aim to maintain an average load factor above 0.5 to limit waste relative to the number of elements stored. In modern systems, additional overhead arises from cache line alignment of the allocation (typically 64 bytes), which may introduce padding of up to one cache line, and from 64-bit architectures where pointers and size fields consume 8 bytes each, increasing the fixed per-array overhead to around 24 bytes.

Variants

Buffer-Oriented Variants

Buffer-oriented variants of dynamic arrays modify the standard linear structure by incorporating gaps or circular wrapping to optimize operations at specific positions, such as cursor locations in text editing or endpoints in queue-like scenarios. These adaptations pre-allocate free space or enable wrap-around behavior to reduce the frequency of costly element shifts, particularly for frequent insertions or deletions clustered near a designated point. Unlike the basic dynamic array, which amortizes costs across end-based appends, buffer-oriented designs trade general-purpose flexibility for efficiency in targeted access patterns.

The gap buffer represents a prominent buffer-oriented variant, consisting of a dynamic array divided into three conceptual parts: content before a cursor position, an empty "gap" of pre-allocated space at the cursor, and content after the cursor. Insertions and deletions occur within or adjacent to the gap, allowing constant-time operations when the cursor remains near this space, as elements are simply copied into the gap without shifting the entire array. Cursor movement involves shifting the gap by copying segments of text between the pre- and post-cursor regions, which is efficient for small displacements but can require linear time for large moves across the buffer. When the gap fills, the array resizes via a grow operation, similar to standard dynamic array expansion, incurring O(n) cost. This structure excels in scenarios with localized edits, such as text editors, where the gap follows the user's cursor to minimize shifts. For instance, the GNU Emacs editor employs a gap buffer implementation, tracking buffer size, gap start, and gap size in its core structure to support efficient single-cursor editing. Despite its advantages, the gap buffer retains limitations inherent to linear arrays, including O(n) costs for insertions far from the gap, which necessitate moving the gap and shifting large portions of data. Resizing the gap also introduces occasional linear-time overhead, making it less suitable for uniformly distributed edits across the entire buffer. These trade-offs position the gap buffer as an extension of the dynamic array tailored for sequential, cursor-following modifications rather than random access.

Another key buffer-oriented variant is the circular buffer (also called a ring buffer), which uses a fixed-capacity array treated as if its ends were joined, where indices wrap around from end to start via modular arithmetic on head and tail pointers. This enables O(1) time for enqueue and dequeue operations at the ends, as no element shifting occurs; instead, the tail advances to overwrite the oldest data when full. Unlike resizable dynamic arrays, circular buffers maintain a constant size to ensure predictable performance, avoiding the overhead of reallocations, though special handling, such as doubling the capacity and reinserting the elements, is required for dynamic growth. They differ from standard dynamic arrays by prioritizing wrap-around for endpoint access, optimizing for first-in, first-out patterns without the amortized costs of frequent resizes. Circular buffers find widespread use in real-time systems, such as digital signal processing (DSP), where fixed memory footprints and constant-time operations are critical to meet timing constraints. In DSP applications like finite impulse response (FIR) filters, the buffer overwrites the oldest sample with the newest, requiring only a single write per update instead of shifting the entire buffer, thus minimizing memory accesses and power consumption in embedded environments. This avoidance of full resizes ensures bounded execution times, making circular buffers ideal for streaming data in sensors or audio processing. However, their fixed capacity limits adaptability to varying data volumes, and multi-threaded implementations demand careful synchronization to prevent race conditions on the head and tail pointers.
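A minimal fixed-capacity ring buffer in C shows the wrap-around indexing at work; the modular arithmetic is what keeps enqueue and dequeue at O(1) with no element shifting. This sketch tracks a head index plus a count rather than separate head and tail pointers, and it rejects writes when full instead of overwriting the oldest entry; both are common alternatives to the overwrite policy described above.
#include <stdbool.h>
#include <stddef.h>

#define RING_CAPACITY 8          // fixed capacity, often chosen as a power of two

typedef struct {
    int    items[RING_CAPACITY];
    size_t head;                 // index of the oldest element
    size_t count;                // number of stored elements
} RingBuffer;

bool ring_enqueue(RingBuffer *r, int value) {
    if (r->count == RING_CAPACITY) return false;          // full: caller decides what to do
    size_t tail = (r->head + r->count) % RING_CAPACITY;   // wrap around the end
    r->items[tail] = value;
    r->count++;
    return true;
}

bool ring_dequeue(RingBuffer *r, int *out) {
    if (r->count == 0) return false;                       // empty
    *out = r->items[r->head];
    r->head = (r->head + 1) % RING_CAPACITY;               // advance, wrapping
    r->count--;
    return true;
}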

Hierarchical Variants

Hierarchical variants of dynamic arrays employ multi-level structures, such as trees or tiers of arrays, to mitigate the linear time cost of insertions in the middle of standard dynamic arrays, achieving sublinear update time through balanced redistribution across levels. These designs trade additional space for pointers and higher constant factors in exchange for efficient access and modification, particularly beneficial for large-scale sequences where locality improves with tiered organization. Tiered vectors, introduced in the late 1990s, organize elements into multiple levels of arrays, where higher tiers contain pointers to lower-tier subarrays. For a two-level (k=2) configuration, the root array holds approximately √n pointers to subarrays of size √n, allowing insertions to shift elements only within a single subarray plus occasional rebalancing across levels, yielding amortized O(√n) time for middle insertions and O(1) access. Resizing in tiered vectors propagates upward: when a subarray fills, it splits and the parent tier is updated, with a full re-tiering occurring infrequently to maintain balance. This structure enhances cache performance for large n by localizing operations to smaller, contiguous subarrays rather than shifting entire arrays as in flat dynamic arrays. The hashed array tree (HAT), proposed in 1996, builds a dynamic array as a tree where each node is an array of fixed-size pointers to child arrays or elements, using hash-like indexing (in practice, power-of-two bit arithmetic) to map indices efficiently. The root array typically holds 32 to 64 pointers, enabling amortized O(1) append operations through local expansions and O(log n) worst-case time for insertions anywhere, as updates traverse and potentially resize a logarithmic number of levels. Construction involves allocating child arrays on demand, with resizing cascading from leaves to root only when a level overflows, preserving overall compactness. HATs support functional persistence through their tree-like organization, allowing efficient creation of modified copies by sharing unchanged parts. Both tiered vectors and HATs incur higher space overhead from pointers, roughly doubling usage relative to flat arrays, and elevated constants due to indirection, making them less suitable for small n but advantageous for applications requiring frequent middle insertions or persistence.
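The constant-time indexing in such hierarchical layouts comes from splitting an index across levels. A sketch of hashed-array-tree-style index mapping with power-of-two leaves; the leaf size of 32 and the field names are illustrative assumptions rather than details of a particular implementation.
#include <stddef.h>

#define LEAF_BITS 5                       // leaves hold 2^5 = 32 elements
#define LEAF_SIZE (1u << LEAF_BITS)
#define LEAF_MASK (LEAF_SIZE - 1)

typedef struct {
    int  **leaves;     // top-level array of pointers to leaf arrays
    size_t size;       // number of elements stored
} HashedArrayTree;

// O(1) access: the high bits of the index select the leaf,
// the low bits select the slot within that leaf.
int hat_get(const HashedArrayTree *t, size_t index) {
    return t->leaves[index >> LEAF_BITS][index & LEAF_MASK];
}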

Language Implementations

Low-Level Languages

In low-level languages like C, dynamic arrays require manual memory management to handle resizing. Developers typically allocate initial memory using malloc and resize the array by calling realloc when more space is needed, which attempts to expand the existing block in place or allocates a new one and copies the contents. This approach preserves data continuity but can fail if sufficient memory is unavailable, necessitating error handling for the returned null pointer. Libraries such as GLib's GArray abstract this process, automatically growing the array by allocating space up to the nearest power of two above the required size upon overflow, amortizing reallocation costs over multiple insertions. In C++, the standard library's std::vector provides a built-in dynamic array with automatic memory management via the Resource Acquisition Is Initialization (RAII) idiom, ensuring deallocation upon scope exit to prevent leaks. It reserves capacity ahead of the current size, often doubling it (a growth factor of approximately 2) during resizes to support efficient insertions, and allows custom allocators for specialized policies such as pool allocation. For example, calling vector.push_back(value) appends an element; if the capacity is exhausted, it triggers a reallocation, copying or moving the existing elements (using move semantics in C++11 and later for efficiency) to the new buffer before inserting the new one. Despite these facilities, challenges persist in low-level implementations. In C, failing to call free on the array pointer after use leads to memory leaks, as there is no automatic cleanup. Type alignment must be considered during allocation to avoid undefined behavior, particularly with realloc, which assumes the original pointer came from the allocator and was aligned properly. In C++, resizes in std::vector demand exception safety: the container provides the strong guarantee that either the operation succeeds fully or leaves the vector unchanged, and element types should supply non-throwing move constructors so that moves rather than copies can be used while preserving this guarantee. The C23 standard adds language features such as bit-precise integers and makes certain variable-length array support mandatory, but it retains manual resizing for dynamic arrays via realloc.
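The realloc-based pattern described above hinges on not losing the original pointer when the call fails; a minimal sketch of the idiom (the function name is illustrative):
#include <stdlib.h>

// Grow an int buffer to hold `new_count` elements. Assigning realloc's result
// to a temporary pointer avoids leaking (or clobbering) the original block
// when realloc fails and returns NULL.
int *grow_buffer(int *data, size_t new_count) {
    int *tmp = realloc(data, new_count * sizeof *tmp);
    if (tmp == NULL) {
        // `data` is still valid here and must eventually be freed by the caller
        return NULL;
    }
    return tmp;
}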

High-Level Languages

In high-level languages with automatic memory management, dynamic arrays are typically implemented as built-in types that abstract away manual resizing and deallocation, leveraging garbage collection (GC) to handle memory reclamation. These implementations prioritize developer convenience and safety, providing amortized constant-time append operations through over-allocation strategies that minimize reallocations. For instance, Java's ArrayList uses an Object[] as its backing store, incurring auto-boxing overhead when storing primitive types because they must be wrapped in objects for the array. The default initial capacity is 10 elements, and it grows by a factor of 1.5 (computed as oldCapacity + (oldCapacity >> 1)) when full, ensuring efficient expansion without user intervention. Python's built-in list type employs a similar approach in its CPython implementation, resizing the underlying PyObject** array in list_resize. It starts with minimal allocation (capacity 0 for empty lists) and over-allocates by approximately 1.125 times the required size via the formula (new_size + (new_size >> 3) + 6) & ~3, which approaches geometric growth for larger sizes and thus achieves amortized O(1) appends. This over-allocation, combined with GC-managed freeing, reduces the frequency of costly resizes. Python 3.12 and later versions include broader interpreter optimizations under the Faster CPython project, improving list operations overall, including resizing efficiency. Other languages like Rust and Go offer dynamic array-like structures with built-in safety features. Rust's Vec<T> doubles its capacity upon growth (via RawVec::grow), providing memory safety through ownership and borrowing rules that prevent issues like buffer overflows, while the allocator releases the storage automatically when the vector is dropped. In Go, slices serve as dynamic arrays, with the runtime's growslice function managing expansion by doubling the capacity for slices under 1024 elements and growing by about 1.25 times thereafter; developers call the append function explicitly, but the runtime ensures amortized linear performance. These implementations commonly support optimized iterators for traversal, such as Java's Iterator interface or Python's generator expressions, and language-specific comprehensions (e.g., Python's list comprehensions) that leverage the underlying array for efficient iteration without explicit bounds checking in hot paths. Automatic GC in these environments frees unused capacity during collection cycles, though developers can trim excess via methods like Java's trimToSize() or by slicing or rebuilding a Python list to mitigate temporary memory overhead.
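The growth rules quoted above can be compared directly. This small C program is an illustration only, reproducing the arithmetic of Java's oldCapacity + (oldCapacity >> 1) step and CPython's list over-allocation expression; it prints the capacities each rule produces for a few inputs.
#include <stdio.h>

// Java ArrayList-style step: grow to old + old/2.
static size_t grow_java(size_t old) { return old + (old >> 1); }

// CPython list_resize-style over-allocation for a requested size.
static size_t grow_cpython(size_t new_size) {
    return (new_size + (new_size >> 3) + 6) & ~(size_t)3;
}

int main(void) {
    size_t cap = 10;                       // ArrayList default initial capacity
    printf("ArrayList-style capacities: ");
    for (int i = 0; i < 6; i++) { printf("%zu ", cap); cap = grow_java(cap); }

    printf("\nCPython-style over-allocation: ");
    for (size_t want = 1; want <= 64; want *= 2)
        printf("%zu->%zu ", want, grow_cpython(want));
    printf("\n");
    return 0;
}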
