Double-ended queue
A double-ended queue (commonly abbreviated as deque and pronounced "deck") is an abstract data type in computer science that generalizes both queues and stacks by allowing the insertion and deletion of elements at either end of the sequence.[1] This structure maintains a linear order of elements, with access restricted to the front and back endpoints, enabling flexible first-in-first-out (FIFO) or last-in-first-out (LIFO) behaviors depending on the operations used.[2] The term "deque" was coined by E. J. Schweppe, as reported by Donald Knuth, to describe this double-ended capability, distinguishing it from single-ended structures like traditional queues.[2][3]
The core operations of a deque typically include adding an element to the front (addFront, push_front, or push), adding to the back (addBack, push_back, or inject), removing and returning the front element (removeFront, pop_front, or pop), and removing and returning the back element (removeBack, pop_back, or eject).[1] Additional utility operations often encompass checking the front or back element without removal (front or peekFront, back or peekBack) and determining if the deque is empty (isEmpty).[4] These operations are designed to execute in constant time, O(1), to ensure efficiency for dynamic sequences.[4] Variants like input-restricted or output-restricted deques limit insertions or deletions to one end, reducing to stacks or queues as special cases.[2]
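As an illustration, the four core operations map directly onto Python's collections.deque (the method names differ from the abstract interface: append/appendleft insert, pop/popleft remove):

```python
from collections import deque

dq = deque()
dq.append(1)        # push_back / addBack: insert at the rear
dq.append(2)
dq.appendleft(0)    # push_front / addFront: insert at the front
# dq now holds [0, 1, 2]

front = dq.popleft()     # pop_front / removeFront: returns 0
back = dq.pop()          # pop_back / removeBack: returns 2
peek = dq[0]             # peekFront equivalent: returns 1 without removing it
is_empty = len(dq) == 0  # isEmpty: False, one element remains
```

All of these calls run in O(1) time in CPython, matching the constant-time requirement described above.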
Deques are commonly implemented using a doubly linked list, which provides direct access to both endpoints via head and tail pointers, facilitating all operations in constant time without fixed size limitations.[4] Alternatively, a circular array (or dynamic array) implementation uses modular indexing with head and tail indices that wrap around the array bounds, offering space efficiency but requiring resizing for growth.[5] Both approaches avoid shifting elements for endpoint modifications, unlike single-ended arrays.[1]
In practice, deques find applications in algorithm design and systems programming, such as buffering input/output streams where data arrives and is consumed from variable ends, simulating printer job queues with priority adjustments, or modeling simulations like traffic flow.[1] They are particularly useful in pathfinding algorithms, like maze solving, where backtracking involves adding paths to one end and exploring from the other, or in undo mechanisms for text editors by treating operations as reversible additions to the front or back.[4]
Introduction and Terminology
Definition and Basic Concept
A double-ended queue (deque) is an abstract data type that supports the insertion and removal of elements from both its front and rear ends, providing flexibility beyond traditional linear structures.[6] This structure generalizes both the queue, which follows a first-in, first-out (FIFO) discipline by adding to the rear and removing from the front, and the stack, which adheres to a last-in, first-out (LIFO) order by operating solely at one end; a deque encompasses both behaviors as special cases by restricting operations to specific ends.[6]
It is pronounced /dɛk/, like "deck"; the spelling "deque" distinguishes it from "dequeue," the verb associated with removing elements from standard queues.[6]
For instance, a deque can emulate a traditional queue by exclusively inserting elements at the rear and removing them from the front, thereby maintaining FIFO ordering while retaining the capability for alternative usage patterns.[6]
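For example, both special cases can be sketched with Python's collections.deque by restricting which ends are used:

```python
from collections import deque

# FIFO queue: insert at the rear, remove from the front.
q = deque()
for item in ["a", "b", "c"]:
    q.append(item)                                  # enqueue at the rear
fifo_order = [q.popleft() for _ in range(len(q))]   # dequeue from the front

# LIFO stack: insert and remove at the same end.
s = deque()
for item in ["a", "b", "c"]:
    s.append(item)                                  # push
lifo_order = [s.pop() for _ in range(len(s))]       # pop from the same end
```

The FIFO sequence preserves insertion order, while the LIFO sequence reverses it, confirming that each discipline is just a usage pattern of the same structure.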
Naming Conventions
The preferred term for this data structure in computer science is "deque," an abbreviation for "double-ended queue," pronounced like "deck." This nomenclature was coined by E. J. Schweppe, as documented by Donald E. Knuth in his foundational text on algorithms.[7] The term emerged in computing literature during the 1960s, reflecting early efforts to formalize versatile queue variants.
To prevent ambiguity with the verb "dequeue," which describes removing an element from the front of a standard queue, the spelling "dequeue" for the data structure is generally deprecated in technical writing.[8] Alternative designations include the expanded "double-ended queue" or, less frequently, "double-sided queue," though these are typically used for clarity in introductory contexts rather than formal references.
In some older literature, shorthand forms like "deq" appear, but "deque" has become the convention. Endpoint designations vary slightly by source or implementation, with "front" and "back" preferred in many modern descriptions over "head" and "tail" to emphasize bidirectional access without overlap in terminology from unidirectional queues. For precision and consistency, "deque" is recommended in formal and academic discussions.
Variants and Distinctions
Sub-types of Deques
Deques can be restricted in their operations to create specialized variants that limit insertions or deletions to specific ends, offering trade-offs in flexibility for efficiency in certain applications. These restrictions distinguish them from full deques while retaining double-ended capabilities in a limited form.[2]
An input-restricted deque allows insertions only at one end, typically the rear, while permitting deletions from both the front and rear. This variant, also known as a queue-ended stack or "quack," supports enqueue operations exclusively at the rear but enables dequeue from either end, making it suitable for scenarios where input order is fixed but flexible output is needed.[9][2] The concept originates from discussions in fundamental algorithms literature, where such restrictions simplify analysis of permissible sequences.[2]
Input-restricted deques have been applied in algorithms requiring ordered input with dual-output flexibility, such as Breu's algorithm for finding maximum values between moving pointers in a sequence, where the structure maintains invariants efficiently without full double-ended insertions.
An output-restricted deque, conversely, permits insertions at both ends but restricts deletions to one end, usually the front. This allows enqueue from front or rear while limiting dequeue to the front, behaving somewhat like a stack for output but with queue-like input options. This variant is also known as a stack-ended queue or "steque."[10][11][2] Like its input counterpart, it was formalized in early data structure analyses to explore permutation generation and sorting behaviors.[2]
They also support greedy strategies in deque-based computations, where deletions are confined to minimal elements to approximate optimal sequences.[12]
These sub-types bridge queues and stacks by relaxing one operation's restrictions while enforcing the other, enabling hybrid behaviors that queues (strict FIFO) or stacks (LIFO) cannot provide without additional overhead.[11] In modern computing, they are less prevalent than full deques in high-level languages but remain relevant in resource-constrained environments, such as embedded systems or specialized algorithms, where operational limits reduce implementation complexity and memory footprint.[2]
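As a sketch, an output-restricted deque can be modeled by wrapping a full deque and simply omitting rear deletion; the class and method names here are illustrative, not a standard API:

```python
from collections import deque

class OutputRestrictedDeque:
    """Illustrative steque: insertions at both ends, deletions only at the front."""

    def __init__(self):
        self._d = deque()

    def push_front(self, x):
        self._d.appendleft(x)

    def push_back(self, x):
        self._d.append(x)

    def pop_front(self):
        return self._d.popleft()   # the only permitted deletion
    # Deliberately no pop_back: removal from the rear is disallowed.

steque = OutputRestrictedDeque()
steque.push_back(2)
steque.push_front(1)
steque.push_back(3)
order = [steque.pop_front() for _ in range(3)]   # [1, 2, 3]
```

An input-restricted variant would be symmetric: keep both pop methods but expose only one of the push methods.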
Comparison with Other Data Structures
A double-ended queue, or deque, generalizes the queue data structure by permitting insertions and deletions at both the front and rear ends, whereas a standard queue enforces a strict first-in, first-out (FIFO) discipline that allows insertions only at the rear and removals only from the front.[13][2] Similarly, a deque extends the stack by supporting operations at both ends, in contrast to the stack's last-in, first-out (LIFO) restriction, which confines insertions and removals to a single top end.[13][2] This dual-end flexibility positions the deque as a superset of both queues and stacks, enabling it to emulate their behaviors while offering additional versatility.[2][1]
Compared to singly or doubly linked lists, a deque maintains an ordered sequence with constant-time access and modifications at the ends but lacks efficient operations for accessing or modifying elements in the middle, requiring linear-time traversal from an endpoint.[14][15] Linked lists, by contrast, support insertions and deletions throughout the structure in constant time given a direct pointer to the position, though locating arbitrary positions still demands O(n) traversal in the absence of indexing.[14] Thus, deques prioritize end-focused operations on ordered collections, while linked lists provide broader positional flexibility at the cost of potentially higher overhead for end-specific tasks.[15]
In relation to dynamic arrays or vectors, deques enable efficient insertions and deletions at both ends without the need for element shifting, which would otherwise impose linear-time costs on front-end operations in a standard dynamic array.[16][5] Dynamic arrays excel at amortized constant-time appends to the rear and random access via indexing, but front insertions necessitate shifting all elements, degrading performance.[17] Deques achieve their end efficiencies through specialized implementations, such as segmented arrays or doubly linked blocks, avoiding the contiguous storage pitfalls of dynamic arrays.[16]
Deques offer a balanced trade-off in flexibility, supporting a wider range of operations than specialized structures like queues or stacks, but they often incur higher constant factors due to the overhead of managing dual ends and potential non-contiguous storage.[18][19] This generality comes at the expense of simplicity and optimization tailored to single-end access in queues or stacks, or full positional control in linked lists.[18]
Deques are particularly suitable for scenarios requiring hybrid behaviors, such as algorithms that alternate between queue-like FIFO processing and stack-like LIFO reversal, or applications like sliding window computations where elements are added or removed from either end dynamically.[1][2]
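A concrete instance of the sliding-window use case is the classic sliding-window maximum algorithm, which keeps a monotonic deque of candidate indices; the function below is an illustrative Python sketch:

```python
from collections import deque

def sliding_window_max(values, k):
    """Maximum of each length-k window, via a deque of candidate indices.

    The deque is kept in decreasing value order: dominated candidates are
    removed from the back, and indices that leave the window from the front,
    so values[dq[0]] is always the current window's maximum.
    """
    dq = deque()
    out = []
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:   # drop dominated candidates (back)
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:                  # expire an index outside the window (front)
            dq.popleft()
        if i >= k - 1:
            out.append(values[dq[0]])
    return out
```

Each index is appended and removed at most once, so the whole pass runs in O(n) despite the nested-looking loop.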
Operations
Core Operations
A double-ended queue, commonly known as a deque, provides fundamental operations for adding, removing, and inspecting elements exclusively at its front and rear ends, enabling flexible FIFO and LIFO behaviors as needed.[15] These operations form the abstract interface of a deque, independent of underlying representations.[2]
Insertion operations include push_front, which adds an element to the front of the deque, and push_back, which adds an element to the rear.[15] For push_front, the high-level process involves checking if the deque is empty; if so, the new element becomes both the front and rear; otherwise, it is inserted immediately before the current front.[20] Similarly, push_back checks for emptiness and sets the new element as both ends if empty, or appends it after the current rear if not.[20]
Removal operations are pop_front, which removes and returns the element at the front, and pop_back, which removes and returns the element at the rear.[2] If the deque is empty, pop_front or pop_back typically throws an exception or returns a null indicator to signal the error condition.[15] After removal from a non-empty deque, the front or rear pointer updates to the next or previous element, respectively; if the deque becomes empty, both pointers may be set to null.[20]
Inspection operations, peek_front and peek_back (also known as front and back), allow viewing the element at the front or rear without modifying the deque.[15] These return the respective element if the deque is non-empty but throw an exception or return null if empty, mirroring the behavior of removal operations.[15]
The following pseudocode illustrates these operations at an abstract level, assuming a structure with front and rear pointers and a mechanism for element insertion/removal:
push_front(element):
    if deque is empty:
        set front to new element
        set rear to new element
    else:
        insert new element before front
        update front to new element

push_back(element):
    if deque is empty:
        set front to new element
        set rear to new element
    else:
        insert new element after rear
        update rear to new element

pop_front():
    if deque is empty:
        throw EmptyDequeException or return null
    else:
        temp = front
        update front to next element
        if front is now null:
            set rear to null
        return temp

pop_back():
    if deque is empty:
        throw EmptyDequeException or return null
    else:
        temp = rear
        update rear to previous element
        if rear is now null:
            set front to null
        return temp

peek_front():
    if deque is empty:
        throw EmptyDequeException or return null
    return front element

peek_back():
    if deque is empty:
        throw EmptyDequeException or return null
    return rear element
[20]
These core operations assume sequential, single-threaded access; concurrent modifications require additional mechanisms for thread safety, which are beyond the standard abstract definition.[2] In some deque variants, such as input-restricted or output-restricted types, certain operations may be prohibited, but the standard deque permits all four insertion and removal actions symmetrically.[2]
Advanced Operations and Constraints
In addition to core insertion and removal operations at the ends, double-ended queues support supplementary functions for querying and managing their state. The size operation returns the number of elements currently stored in the deque.[21] The isEmpty operation determines whether the deque contains any elements.[21] The clear operation removes all elements from the deque, effectively emptying it.[21]
Some deque implementations provide random access, enabling direct retrieval of elements by position. In C++'s std::deque, for example, at(n) returns the nth element with bounds checking, while operator[] does so without it.[21] Deques also facilitate iteration, allowing sequential access to elements from front to rear via forward iterators, or from rear to front via reverse (descending) iterators.[21][23]
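In Python, for instance, collections.deque supports these utility operations, though its positional access costs O(n) rather than the O(1) of array-backed designs:

```python
from collections import deque

dq = deque([10, 20, 30, 40])

n = len(dq)                     # size: number of stored elements
third = dq[2]                   # positional access (O(n) in collections.deque)
forward = list(dq)              # front-to-rear iteration
backward = list(reversed(dq))   # rear-to-front (descending) iteration

dq.clear()                      # clear: remove all elements
empty_after_clear = len(dq) == 0
```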
Variants of deques impose constraints on permitted operations to suit specific applications. In an input-restricted deque, insertions are allowed only at one end—typically the rear—while deletions can occur from both ends.[24] This restriction simplifies certain processing tasks, such as tree reductions, where elements are added sequentially but removed flexibly.[25]
An output-restricted deque, in contrast, permits insertions at both ends but limits deletions to one end, usually the front.[10] Known in some contexts as a stack-ended queue or steque, this variant generalizes queues while enforcing output discipline.[10] For instance, in an output-restricted deque, a pop operation from the rear is invalid, preventing removal from the insertion-only end opposite the front.[10]
Implementations
Array-Based Implementations
Array-based implementations of double-ended queues utilize dynamic arrays, often configured as circular buffers, to support efficient insertions and deletions at both ends without frequent element shifting. This approach employs two pointers—one for the front and one for the rear—to track positions within the array, leveraging modulo arithmetic to wrap around the array boundaries when necessary.[26]
In a circular buffer implementation, the array is treated as a ring where indices are computed using the modulo operator to simulate continuity. For instance, to perform a push_front operation, the front pointer is decremented by one and wrapped using modulo the array size; if the buffer is full, the array is resized before insertion. The following pseudocode illustrates a basic push_front for a bounded deque:
function push_front(element):
    if size == capacity:
        resize(capacity * 2)
    front = (front - 1 + capacity) % capacity
    array[front] = element
    size += 1
Similar logic applies to push_back by incrementing the rear pointer modulo capacity.[27][26]
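This scheme can be fleshed out into a runnable sketch; the following Python class (the names and doubling policy are illustrative choices, not a standard API) implements the circular-buffer approach:

```python
class ArrayDeque:
    """Illustrative circular-buffer deque with doubling growth."""

    def __init__(self, capacity=4):
        self.array = [None] * capacity
        self.capacity = capacity
        self.front = 0          # index of the first element
        self.size = 0

    def _resize(self, new_capacity):
        # Copy the elements in logical order into a fresh, larger array.
        old = [self.array[(self.front + i) % self.capacity] for i in range(self.size)]
        self.array = old + [None] * (new_capacity - self.size)
        self.capacity = new_capacity
        self.front = 0

    def push_front(self, x):
        if self.size == self.capacity:
            self._resize(2 * self.capacity)
        self.front = (self.front - 1 + self.capacity) % self.capacity  # wrap backward
        self.array[self.front] = x
        self.size += 1

    def push_back(self, x):
        if self.size == self.capacity:
            self._resize(2 * self.capacity)
        self.array[(self.front + self.size) % self.capacity] = x       # wrap forward
        self.size += 1

    def pop_front(self):
        if self.size == 0:
            raise IndexError("pop from empty deque")
        x = self.array[self.front]
        self.front = (self.front + 1) % self.capacity
        self.size -= 1
        return x

    def pop_back(self):
        if self.size == 0:
            raise IndexError("pop from empty deque")
        x = self.array[(self.front + self.size - 1) % self.capacity]
        self.size -= 1
        return x

d = ArrayDeque(capacity=2)
d.push_back(1)
d.push_back(2)
d.push_front(0)                                        # full, so this triggers a resize
popped = [d.pop_front(), d.pop_back(), d.pop_front()]  # [0, 2, 1]
```

Note that no element is ever shifted on a push or pop; only the front index and size change, with copying confined to the occasional resize.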
To handle varying sizes efficiently, dynamic array resizing is employed: the capacity is typically doubled when the deque reaches full utilization to accommodate growth, and halved when the utilization falls below a threshold (e.g., 25%) to reclaim memory. This resizing strategy relies on amortized analysis, ensuring that occasional O(n) copy operations are offset by multiple O(1) insertions, yielding constant average time per operation.
These implementations offer advantages such as O(1) time complexity for random access to elements via direct indexing and improved cache locality due to contiguous memory storage. However, resizing can incur O(n) time in the worst case, though amortized to O(1), and may lead to temporary memory overhead during reallocation.
A notable example is the C++ standard library's std::deque, which enhances the basic array approach by using an array of fixed-size blocks (typically 512 bytes each) pointed to by a dynamic map array. This chunked structure balances efficient end operations with reasonable random access, minimizing large-scale shifts during insertions or deletions at the ends.[28]
Linked List Implementations
Doubly linked list implementations of deques utilize a sequence of nodes, where each node contains the data element along with pointers to the previous and next nodes, enabling efficient navigation and modification from both ends. To facilitate boundary operations and avoid special cases for empty lists, sentinel nodes—such as a header (pointing to the front) and a trailer (pointing to the rear)—are commonly employed at the beginning and end of the list. This structure maintains references to the head and tail for direct access, ensuring that insertions and deletions at either end involve only constant-time pointer adjustments.[6]
The core operations, such as push_front and push_back, exemplify the simplicity of this approach. For push_front(value), a new node is created with the given value; its next pointer is set to the current head (or the node after the header sentinel), and the current head's previous pointer is updated to point back to the new node; finally, the head reference is updated to the new node. If the deque was empty, the new node's previous pointer points to the header, and its next to the trailer. Pseudocode for this operation is as follows:
function push_front(value):          // variant without sentinel nodes
    new_node = new Node(value)
    new_node.prev = null
    new_node.next = head
    if head is not null:
        head.prev = new_node
    else:                            // deque was empty
        tail = new_node
    head = new_node
Similar pointer manipulations apply to push_back, pop_front, and pop_back, each achieving O(1) time complexity without amortization.[6]
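A runnable sketch of the sentinel-based doubly linked approach, with illustrative (non-standard) class and method names:

```python
class Node:
    __slots__ = ("value", "prev", "next")

    def __init__(self, value=None):
        self.value = value
        self.prev = None
        self.next = None

class LinkedDeque:
    """Illustrative doubly linked deque with header/trailer sentinels."""

    def __init__(self):
        self.header = Node()            # sentinel before the front
        self.trailer = Node()           # sentinel after the rear
        self.header.next = self.trailer
        self.trailer.prev = self.header
        self.size = 0

    def _insert_between(self, value, pred, succ):
        node = Node(value)
        node.prev, node.next = pred, succ
        pred.next = succ.prev = node
        self.size += 1

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev
        self.size -= 1
        return node.value

    def push_front(self, value):
        self._insert_between(value, self.header, self.header.next)

    def push_back(self, value):
        self._insert_between(value, self.trailer.prev, self.trailer)

    def pop_front(self):
        if self.size == 0:
            raise IndexError("pop from empty deque")
        return self._remove(self.header.next)

    def pop_back(self):
        if self.size == 0:
            raise IndexError("pop from empty deque")
        return self._remove(self.trailer.prev)

d = LinkedDeque()
d.push_back("b")
d.push_front("a")
d.push_back("c")
order = [d.pop_front(), d.pop_back(), d.pop_front()]  # ["a", "c", "b"]
```

Because the sentinels are never removed, every insertion and deletion reduces to the same between-two-nodes pointer splice, with no special case for the empty deque.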
Unlike array-based implementations that require resizing, linked list deques grow and shrink dynamically by allocating or freeing individual nodes as needed, eliminating the need for capacity management or preallocation. This results in no wasted space for unused slots, with overall space complexity of O(n), where n is the number of elements.[6]
The primary advantages include true worst-case O(1) time for all end operations, as pointer updates are independent of list size, and efficient handling of dynamic sizes without contiguous memory constraints. However, disadvantages arise from non-contiguous memory allocation, leading to poor cache locality during sequential traversals, and the absence of efficient random access—reaching the i-th element requires O(n) traversal time from either end.[29]
An optimized real-world example is Python's collections.deque, which employs a doubly linked list of fixed-size blocks (typically 64 elements each) rather than single-element nodes; this hybrid reduces pointer overhead and improves cache performance while preserving O(1) end operations.[30]
Purely Functional Implementations
In purely functional implementations of double-ended queues (deques), the structures are persistent, such that every operation—such as insertion or deletion at either end—returns a new deque while sharing the unmodified portions of the original structure to minimize space and time overhead through structural sharing. This approach aligns with the immutability principles of functional programming languages, avoiding in-place mutations and enabling safe concurrent access without locks.[31]
A foundational technique is the Banker's deque, which represents the deque using two balanced streams: a front stream for efficient head access and a rear stream (often reversed) for tail access, maintaining the invariant that the front is at least as large as the rear to amortize costs. When the balance is violated, such as after multiple rear insertions, a rotation lazily rebuilds the front by appending the reversed rear, distributing the O(n) rebuilding cost over subsequent operations to achieve amortized O(1) time for cons, head, tail, snoc, last, and init. This method, analyzed using the banker's approach to potential functions, ensures linear total time for sequences of operations.[31]
Earlier work by Hood and Melville introduced the first purely functional real-time queue using global rebuilding, where the structure is periodically reconstructed from scratch to maintain balance, and this was extended to deques with similar guarantees for worst-case O(1) operations without amortization. Lazy rebuilding simplifies global rebuilding by deferring reconstruction until an imbalance is detected during access, leveraging lazy evaluation to incrementally compute changes and yielding amortized O(1) bounds while reducing implementation complexity. For real-time variants, Okasaki developed scheduling mechanisms that force a constant number of suspensions per operation, ensuring worst-case O(1) time for all core deque operations by proactively managing computation steps.[31]
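The two-list idea underlying these designs can be sketched in Python with immutable tuples; this simplified version performs the reversal eagerly (so a single pop can cost O(n)), whereas a true Banker's deque uses lazy streams and balance invariants to reach amortized O(1):

```python
# Persistent deque as a pair of tuples: `front` in order, `rear` reversed.
# Every operation returns a new deque; earlier versions remain fully usable.

def empty():
    return ((), ())

def push_front(dq, x):
    front, rear = dq
    return ((x,) + front, rear)

def push_back(dq, x):
    front, rear = dq
    return (front, (x,) + rear)

def pop_front(dq):
    front, rear = dq
    if not front:                                # refill front by reversing rear
        front, rear = tuple(reversed(rear)), ()
    if not front:
        raise IndexError("pop from empty deque")
    return front[0], (front[1:], rear)

def pop_back(dq):
    front, rear = dq
    if not rear:                                 # refill rear by reversing front
        rear, front = tuple(reversed(front)), ()
    if not rear:
        raise IndexError("pop from empty deque")
    return rear[0], (front, rear[1:])

d1 = push_back(push_back(push_front(empty(), 1), 2), 3)  # deque: 1, 2, 3
x, d2 = pop_front(d1)   # x == 1
y, d3 = pop_back(d2)    # y == 3
# Persistence: d1 is untouched, so popping it again still yields 1.
```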
A prominent example is Haskell's Data.Sequence, implemented in the containers library using 2-3 finger trees annotated with sizes, which support amortized O(1) access to both ends and O(log n) splits and concatenations, generalizing deques to versatile sequences suitable for functional contexts. These implementations offer key advantages, including inherent thread safety due to immutability and compatibility with pure functional paradigms, where multiple versions of a deque can coexist without interference, facilitating reasoning about programs and enabling efficient parallelism.[32][33]
Language and Library Support
Built-in Language Support
In C++, the Standard Template Library provides std::deque as a built-in container in the <deque> header, implementing a double-ended queue that supports efficient insertion and deletion at both ends via methods such as push_front and push_back, along with constant-time random access to elements by index. For example, the following code demonstrates basic usage:
cpp
#include <deque>
#include <iostream>

int main() {
    std::deque<int> dq;
    dq.push_back(1);                   // Add to end
    dq.push_front(0);                  // Add to front
    std::cout << dq[0] << std::endl;   // Access by index
    return 0;
}
In Java, the java.util.Deque interface, introduced in Java 1.6, defines the double-ended queue operations including addFirst, addLast, removeFirst, and removeLast, with the array-based ArrayDeque class serving as the primary built-in implementation that provides resizable storage without capacity restrictions.[23][34] An example usage is:
java
import java.util.ArrayDeque;
import java.util.Deque;

public class Example {
    public static void main(String[] args) {
        Deque<Integer> dq = new ArrayDeque<>();
        dq.addFirst(0);                           // Add to front
        dq.addLast(1);                            // Add to end
        System.out.println(dq.removeFirst());     // Remove from front
    }
}
Python includes collections.deque as a built-in type since version 2.4, offering thread-safe append and pop operations from both ends with methods like appendleft and append, as well as a rotate method for efficient shifting of elements.[35][36] A simple example:
python
from collections import deque

dq = deque()
dq.appendleft(0)     # Add to left (front)
dq.append(1)         # Add to right (end)
print(dq.popleft())  # Remove from left
Rust's standard library features std::collections::VecDeque, a double-ended queue built on a growable ring buffer for efficient operations at both ends, supporting push_front, push_back, pop_front, and pop_back.[37] Example code:
rust
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<i32> = VecDeque::new();
    dq.push_front(0);                          // Add to front
    dq.push_back(1);                           // Add to back
    println!("{}", dq.pop_front().unwrap());   // Remove from front
}
Ada provides support through the Ada.Containers.Doubly_Linked_Lists package in the standard library since Ada 2005, which, while primarily a doubly-linked list, can be adapted for deque functionality using operations like Prepend and Append to insert at the beginning or end, and Delete_First and Delete_Last for removal.[38] For instance:
ada
with Ada.Containers.Doubly_Linked_Lists;
with Ada.Text_IO; use Ada.Text_IO;

procedure Deque_Example is
   package Int_Lists is new Ada.Containers.Doubly_Linked_Lists (Element_Type => Integer);
   use Int_Lists;
   L : List := Empty_List;
begin
   Prepend (Container => L, New_Item => 0); -- Add to front
   Append (Container => L, New_Item => 1);  -- Add to end
   Delete_First (Container => L);           -- Remove from front
end Deque_Example;
Historically, early Lisp dialects supported deque-like structures through their fundamental list primitives, such as cons cells, which enabled efficient implementations of double-ended operations in symbolic processing tasks central to the language's design since the late 1950s.[39]
Third-Party and Standard Library Implementations
In the Go programming language, third-party libraries fill the gap for deque support, as the standard library provides only a doubly-linked list via container/list. A widely used implementation is the deque package from GitHub user gammazero, which offers a high-performance ring-buffer-based deque optimized for efficient insertions and removals at both ends, with O(1) amortized time complexity for core operations.[40] This package has been available since 2017 and supports generic types in modern Go versions, making it suitable for concurrent applications.
Swift's standard library lacks a built-in deque, but the official Swift Collections package, developed by Apple and released in 2021, includes the Deque type as part of its multiplatform collection enhancements. This implementation uses a ring buffer backend for random-access efficiency, supporting O(1) insertions and deletions at either end while maintaining array-like subscripting and range-replaceability.[41] As of 2025, it remains a core component of the package, with updates in version 1.3.0 improving integration with Foundation frameworks.[42]
JavaScript, lacking a native deque in its standard library, relies on third-party modules for robust implementations. The deque library by petkaantonov provides an extremely fast, low-level double-ended queue using a contiguous array with modular indexing, achieving near-optimal performance for push/pop operations at both ends without garbage collection overhead in Node.js environments.[43] Complementing this, Immutable.js's List structure implements deque semantics with full persistence, enabling O(log N) updates and efficient reversal or slicing, ideal for functional programming paradigms.[44] Another option is @datastructures-js/deque, a modern ES6-compatible package emphasizing type safety and benchmarked performance comparable to native arrays for small to medium sizes.[45]
In C, where the standard library offers no deque facilities, developers often resort to custom implementations using dynamic arrays or doubly-linked lists, or third-party options like the dque library from c-utils, which emulates the C++ std::deque API with support for LIFO, FIFO, and priority queue variants through a generic key-based structure.[46] This library, designed for embedded and systems programming, ensures portability and minimal dependencies.
Julia's ecosystem provides the Deque type via the DataStructures.jl package, an established third-party library for advanced data structures. Implemented as an unrolled linked list of blocks, it delivers amortized O(1) time for end insertions and removals, with efficient iteration and resizing for scientific computing workloads.
By 2025, Kotlin has expanded deque support beyond its standard ArrayDeque through libraries like kotlinx.collections.immutable, which introduces persistent, immutable collection interfaces suitable for concurrent and multiplatform applications. While not a direct deque, its PersistentQueue and related structures enable deque-like behaviors with thread-safe persistence, addressing needs in reactive and coroutine-based programming.[47]
Time Complexity
The time complexity of double-ended queue (deque) operations varies depending on the underlying implementation, such as doubly-linked lists or array-based structures, with Big O notation providing an asymptotic analysis of efficiency.[48]
In doubly-linked list implementations, insertions and deletions at both ends—such as push_front, push_back, pop_front, and pop_back—achieve O(1) worst-case time complexity, as these operations only require updating a constant number of pointers without traversing the structure.[48] In contrast, array-based implementations, including circular buffers, typically offer O(1) amortized time for these end operations, accounting for occasional resizing costs when the array capacity is exceeded.[49]
Random access to elements by index is O(1) in array-based deques, benefiting from direct array indexing, whereas it requires O(n time in linked list implementations due to the need to traverse from the head or tail.[48]
Insertions and deletions in the middle of a deque incur O(n) time complexity across both array-based and linked list implementations, as they necessitate shifting or traversing a linear number of elements to maintain the structure.[48]
Resizing in array-based deques, often triggered by operations like push_front or push_back when capacity is full, results in amortized O(1) time per operation through methods such as the potential method or accounting argument, where infrequent O(n) rebuilds (e.g., copying elements to a doubled array) are spread across many O(1) steps.[49] For instance, in an array-based push_front, the occasional O(n) rebuild occurs only when resizing, but the average cost per insertion remains O(1) over a sequence of operations.[50]
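The amortized argument can be illustrated with a simplified circular-array deque. This is a sketch under stated assumptions (names such as `ArrayDeque` and `_grow` are chosen for illustration, the initial capacity is arbitrary, and pop on an empty deque is not handled); the head index wraps with modular arithmetic, and the occasional O(n) rebuild in `_grow` is paid for by the many O(1) pushes between doublings:

```python
class ArrayDeque:
    # Circular-array deque sketch: the head index wraps around the buffer,
    # and the buffer doubles when full, giving amortized O(1) end operations.
    def __init__(self, capacity=4):
        self.buf = [None] * capacity
        self.head = 0
        self.size = 0

    def _grow(self):
        # O(n) rebuild: copy elements into a doubled buffer, restarting at 0.
        old, cap = self.buf, len(self.buf)
        self.buf = [None] * (2 * cap)
        for i in range(self.size):
            self.buf[i] = old[(self.head + i) % cap]
        self.head = 0

    def push_front(self, value):
        if self.size == len(self.buf):
            self._grow()
        self.head = (self.head - 1) % len(self.buf)  # wrap backwards
        self.buf[self.head] = value
        self.size += 1

    def push_back(self, value):
        if self.size == len(self.buf):
            self._grow()
        self.buf[(self.head + self.size) % len(self.buf)] = value
        self.size += 1

    def pop_front(self):
        value = self.buf[self.head]
        self.buf[self.head] = None
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return value
```

Because each doubling copies n elements but is followed by at least n cheap pushes before the next resize, the average cost per push stays constant.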
In hybrid implementations, such as those using fixed-size blocks (e.g., Python's collections.deque with blocks of 64 elements), the choice of block size influences constant factors in time complexity but preserves the O(1) amortized bound for end operations by distributing resizing overhead.[50]
| Operation | Linked List (Worst-Case) | Array-Based (Amortized) |
|---|---|---|
| Push/Pop Front/Back | O(1) | O(1) |
| Random Access | O(n) | O(1) |
| Insert/Delete Middle | O(n) | O(n) |
This table summarizes the key time complexities for primary deque operations across implementations.[48][49]
Space Complexity
The space complexity of a double-ended queue (deque) is generally O(n), where n is the number of elements stored, across various implementations, as the structure must allocate space proportional to the elements it holds.[31] This includes the space for the elements themselves plus metadata for navigation and management. The total space can be expressed as S = n × e + m, where e is the size of each element and m is the metadata overhead (such as pointers or indices).[48]
In a linked list implementation, the deque achieves exactly O(n) space usage, with each node typically requiring storage for the data element plus three pointers: one for the previous node, one for the next node, and potentially a reference to the data if not embedded.[51] On a 64-bit system, the pointer overhead alone contributes approximately 24 bytes per node (three 8-byte pointers), making this approach memory-intensive for small elements due to the fixed per-node cost, though it avoids pre-allocation waste.[52]
Array-based implementations also exhibit O(n) space complexity but incur additional capacity overhead to support efficient insertions without frequent reallocations. These typically over-allocate storage, such as doubling the capacity (growth factor of 2) upon reaching limits, leading to temporary unused space that averages around 50% immediately after resizing in unbalanced workloads.[53] For example, in Java's ArrayDeque, the structure grows dynamically without fixed restrictions, but the buffer may hold up to twice the current size post-expansion, balancing amortized time efficiency against memory utilization.[54] Circular buffer variants within array deques minimize shifting but can waste up to 50% space during resizes if the new allocation is not immediately filled.[55]
Purely functional implementations maintain O(n) space complexity when balanced, using structures like streams or suspended lists to represent front and rear segments, with metadata such as length fields adding constant overhead per segment.[31] Unbalanced variants, such as simple cons-based rear additions without rotation, can lead to O(n log n) worst-case space in persistent contexts due to accumulated suspensions or deep nesting, though balancing techniques like periodic rotation restore O(n) usage.[31] Chris Okasaki's Banker's deques, for instance, enforce invariants like |F| ≥ |R| to ensure linear space via lazy evaluation.[31]
Optimizations like block-based designs reduce pointer overhead in practice; for example, Python's collections.deque employs a doubly-linked list of fixed-size blocks (each holding multiple elements contiguously), lowering the effective metadata cost compared to per-element pointers while preserving O(1) end operations.[56] This hybrid approach achieves near-array locality with linked flexibility, making it suitable for large n where pure linked lists would fragment memory excessively.[57]
Applications
Algorithmic Use Cases
Double-ended queues (deques) are particularly valuable in algorithmic contexts where efficient insertions and deletions from both ends enable optimized solutions to problems involving dynamic windows, symmetric comparisons, or prioritized traversals. Their ability to maintain order while supporting O(1) operations at either end makes them ideal for scenarios where stacks or queues alone would require additional overhead or restructuring.[58]
One prominent algorithmic use case is the sliding window maximum problem, where the goal is to find the maximum element in every contiguous subarray of fixed size k within an array of n elements. A naive approach would examine each window in O(k) time, leading to O(nk) overall complexity, but a deque-based method achieves O(n) time by maintaining a deque of indices corresponding to potentially useful elements in decreasing order of their values. As the window slides, indices of elements outside the current window are removed from the front, and from the back, any indices with values smaller than the new element are discarded since they cannot be maxima in future windows. The front of the deque always holds the index of the current maximum, allowing O(1) retrieval per window.[59][60]
The following pseudocode illustrates the deque maintenance for the sliding window maximum:
function slidingWindowMaximum(arr, k):
    deque = empty deque  // stores indices
    result = empty array
    for i from 0 to length(arr) - 1:
        // Remove the index that has fallen outside the current window
        if deque not empty and deque.front() == i - k:
            deque.pop_front()
        // Remove indices with smaller values from the back
        while deque not empty and arr[deque.back()] < arr[i]:
            deque.pop_back()
        deque.push_back(i)
        // Once the first window of k elements is complete, record its maximum
        if i >= k - 1:
            result.append(arr[deque.front()])
    return result
This approach ensures each element is added and removed from the deque at most once, confirming the O(n) time complexity.[61]
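One runnable translation of the pseudocode, using Python's collections.deque (the function name is chosen here for illustration):

```python
from collections import deque

def sliding_window_maximum(arr, k):
    dq = deque()  # stores indices; corresponding values decrease front to back
    result = []
    for i, x in enumerate(arr):
        # Remove the index that has fallen outside the current window
        if dq and dq[0] == i - k:
            dq.popleft()
        # Remove indices whose values can no longer be window maxima
        while dq and arr[dq[-1]] < x:
            dq.pop()
        dq.append(i)
        # Once the first window of k elements is complete, record its maximum
        if i >= k - 1:
            result.append(arr[dq[0]])
    return result
```

For example, `sliding_window_maximum([1, 3, -1, -3, 5, 3, 6, 7], 3)` yields `[3, 3, 5, 5, 6, 7]`.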
Deques also facilitate efficient palindrome checking for strings or sequences, leveraging symmetric access to compare characters from both ends without reversing the structure. The algorithm enqueues all characters into the deque, then iteratively dequeues from the front and rear, verifying if they match; if any pair mismatches, the sequence is not a palindrome. This process runs in O(n) time and space, where n is the length, and demonstrates the deque's utility in problems requiring bidirectional symmetry.[58]
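The symmetric-comparison idea can be sketched in a few lines of Python; the function name is illustrative:

```python
from collections import deque

def is_palindrome(seq):
    # Load the sequence into a deque, then compare and discard
    # matching characters from both ends until one or none remain.
    dq = deque(seq)
    while len(dq) > 1:
        if dq.popleft() != dq.pop():
            return False
    return True
```

Each character is enqueued and dequeued at most once, giving the O(n) time and space bounds stated above.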
In graph algorithms, deques enhance breadth-first search (BFS) variants, particularly 0-1 BFS for finding shortest paths in unweighted or 0-1 weighted graphs, such as grid-based mazes where moves in certain directions incur zero or unit cost. Unlike standard BFS with a queue, 0-1 BFS treats the deque as a priority mechanism: nodes reached via cost-0 edges are pushed to the front (simulating zero increase in distance), while cost-1 edges push to the back, ensuring Dijkstra-like optimality in O(V + E) time without a full priority queue. This is especially effective in grid paths where alternating directions might correspond to different costs, maintaining level-like exploration while prioritizing cheaper paths.[62][63]
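A minimal sketch of 0-1 BFS, assuming an adjacency-list graph of `(neighbor, weight)` pairs with weights restricted to 0 and 1 (the function name and graph representation are chosen for illustration):

```python
from collections import deque

def zero_one_bfs(adj, source):
    # adj maps each node to a list of (neighbor, weight) pairs, weight in {0, 1}.
    dist = {node: float("inf") for node in adj}
    dist[source] = 0
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                # Zero-cost edges go to the front (same distance level),
                # unit-cost edges to the back (next level).
                if w == 0:
                    dq.appendleft(v)
                else:
                    dq.append(v)
    return dist
```

The deque thus acts as a two-level priority queue: the front holds nodes at the current distance and the back holds nodes one unit further, which is what makes the full priority queue of Dijkstra's algorithm unnecessary.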
Deques further support undo/redo buffers in algorithmic simulations of history management, such as in text editors or command sequences, by pushing new states to one end for the forward history and popping from the same end for undo, while transferring popped elements to the opposite end for redo operations. This bidirectional management avoids the need for separate stacks, enabling efficient navigation through action histories with O(1) operations per action, bounded by the deque's capacity to prevent unbounded growth.[64][65]
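One way to realize the single-deque scheme described above is to pop undone states from the back and park them at the front, tracking how many front entries are redo candidates; the class below is a sketch under that interpretation (the `History` name and the `undone` counter are illustrative additions):

```python
from collections import deque

class History:
    # Single-deque undo/redo sketch: the back holds the active history,
    # the front holds states that have been undone and can be redone.
    def __init__(self):
        self.dq = deque()
        self.undone = 0  # number of redo candidates parked at the front

    def do(self, state):
        # A new action invalidates any pending redo entries.
        for _ in range(self.undone):
            self.dq.popleft()
        self.undone = 0
        self.dq.append(state)

    def undo(self):
        if len(self.dq) == self.undone:
            return None  # nothing left to undo
        state = self.dq.pop()         # remove from the active end
        self.dq.appendleft(state)     # park at the opposite end for redo
        self.undone += 1
        return state

    def redo(self):
        if self.undone == 0:
            return None  # nothing to redo
        state = self.dq.popleft()     # reclaim from the redo end
        self.dq.append(state)
        self.undone -= 1
        return state
```

Every operation is O(1) except `do` after undos, whose cost is bounded by the number of redo entries it discards; a `deque(maxlen=...)` could additionally bound the history size.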
Real-World and Emerging Applications
Double-ended queues (deques) play a crucial role in work-stealing algorithms for parallel processing, enabling efficient load balancing across threads. In Intel Threading Building Blocks (TBB), each thread maintains a local deque of tasks, allowing workers to push tasks onto one end and steal from the other end of idle threads' deques to minimize synchronization overhead and improve throughput in multicore environments.[66][67] Similarly, Java's ForkJoinPool employs per-worker deques to implement work-stealing, where forked tasks are pushed to the tail and joined from the head, with idle threads stealing from the tail of busy workers' deques to balance computational load dynamically.[68][69]
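The owner/thief asymmetry common to these schedulers can be sketched with a single-threaded toy (the `WorkerDeque` name is illustrative); real work-stealing deques in TBB, ForkJoinPool, and crossbeam additionally rely on atomic operations and safe memory reclamation, which this sketch omits:

```python
from collections import deque

class WorkerDeque:
    # Toy per-worker deque: the owning thread pushes and pops at one end
    # (LIFO, good cache locality), while idle thieves take from the
    # opposite end (FIFO, grabbing the oldest and typically largest tasks).
    def __init__(self):
        self.tasks = deque()

    def push(self, task):
        self.tasks.append(task)       # owner: push to the back

    def pop(self):
        return self.tasks.pop() if self.tasks else None      # owner: LIFO

    def steal(self):
        return self.tasks.popleft() if self.tasks else None  # thief: FIFO
```

Operating on opposite ends keeps the owner and thieves from contending for the same entries except when the deque is nearly empty, which is the key to the low synchronization overhead of work stealing.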
In user interface applications, deques facilitate browser history management for forward and backward navigation. Web browsers often use a deque to store visited URLs, adding new pages to the front upon navigation and removing from the back or front for back/forward operations, ensuring efficient access to recent history while maintaining a bounded size to limit memory usage.[70]
For concurrent programming, lock-free deques enable efficient multithreaded operations without traditional locks, reducing contention in high-throughput scenarios. Rust's crossbeam-deque library provides a work-stealing deque implementation using atomic operations and epoch-based reclamation, allowing safe push/pop from both ends across threads for building scalable schedulers and parallel runtimes.[71]
Historical Development
Origins of the Concept
The concept of the double-ended queue (deque) evolved from early generalizations of queue structures used in computer simulations during the 1950s, where basic queues modeled waiting lines in operations research and event-driven systems.
The term "deque" was coined in 1961 by E. J. Schweppe in the context of queueing theory, referring to a structure permitting insertions and deletions at both ends, as documented and analyzed by Donald E. Knuth in his foundational text on algorithms.[72]
In his 1968 book The Art of Computer Programming, Volume 1: Fundamental Algorithms, Donald E. Knuth introduced restricted deques—variants limiting operations to one end for insertion or deletion—as useful in various algorithmic contexts, including parsing.[73]
The deque's theoretical significance was further highlighted in its relation to automata theory, where a two-way finite automaton equipped with a deque as auxiliary storage achieves computational power equivalent to a Turing machine, bridging regular languages with full recursively enumerable sets through bidirectional access to the storage medium.
Key Publications and Evolutions
In 1995, Chris Okasaki published "Simple and Efficient Purely Functional Queues and Deques" in the Journal of Functional Programming, introducing amortized O(1) implementations of deques using lazy evaluation and scheduling techniques tailored for purely functional languages.[74] This work addressed the challenges of maintaining efficiency in immutable settings by balancing front and rear lists through incremental rebuilding.[74]
The third edition of The Art of Computer Programming, Volume 1: Fundamental Algorithms (1997) by Donald E. Knuth updated the analysis of deques as a key abstract data type within linear lists, emphasizing their utility in algorithms for sorting and searching through balanced representations and operations on both ends.[73] Knuth's analysis provided rigorous bounds on time and space complexities, establishing deques as foundational for sequential data processing.[73]
Building on these foundations, Haim Kaplan and Robert E. Tarjan advanced the field in 1999 with their paper "Purely Functional, Real-Time Deques with Catenation" in the Journal of the Association for Computing Machinery.[75] The authors developed worst-case O(1) real-time deques using lazy rebuilding and recursive slowdown, enabling efficient catenation and supporting persistent variants without amortization.[75]
The 2000s marked a shift toward concurrent deques to meet demands in parallel and multiprocessor systems, extending earlier non-blocking queue designs. For instance, Maged M. Michael's 2003 paper "CAS-Based Lock-Free Algorithm for Shared Deques" presented the first practical lock-free deque using compare-and-swap operations, achieving linearizability and scalability for multi-threaded environments.[76] This built on his prior work with Michael L. Scott on concurrent queues, adapting techniques like pointer swinging for bidirectional access.[77] In 2000, David L. Detlefs, Christine H. Flood, Alexander T. Garthwaite, Paul A. Martin, Nir Shavit, and Guy L. Steele Jr. improved DCAS-based concurrent deques in "Even Better DCAS-Based Concurrent Deques," reducing complexity while preserving non-blocking progress.[78]
These key publications have profoundly influenced deque theory, enabling amortized and worst-case efficient structures in functional programming and lock-free concurrency for parallel computing. Recent work, such as verified implementations of catenable deques (as of 2024), continues to build on these foundations.[79][75][76]