
Linked list

A linked list is a linear data structure in which elements, known as nodes, are connected sequentially through pointers or references, with each node typically storing a data value and a link to the next node, allowing non-contiguous memory allocation unlike arrays. Accessed via a head pointer, this structure supports dynamic resizing and efficient insertions or deletions at arbitrary positions without shifting elements, though it requires linear time for access by index.

Linked lists come in several variants to suit different needs. A singly linked list features unidirectional links from each node to the next, forming a chain ending in a null reference. In contrast, a doubly linked list includes bidirectional links, with each node pointing both forward and backward, enabling traversal in either direction and easier deletion but at the cost of additional memory per node. Circular linked lists connect the last node back to the first, creating a loop without a null terminator, which is useful for applications requiring continuous cycling like round-robin scheduling.

Key operations on linked lists include insertion, deletion, traversal, and search, often implemented in O(1) time for head or tail modifications in appropriate variants. Advantages encompass flexible size adjustment without preallocation and O(1) insertion/deletion at known positions, making them ideal for scenarios with frequent structural changes. However, disadvantages include higher memory overhead due to pointer storage, lack of direct indexing leading to O(n) access time, and potential fragmentation from scattered allocation. Linked lists underpin implementations of other abstract data types, such as stacks, queues, and deques, and find applications in file systems, music playlists, and graph representations where order matters but random access is infrequent. They remain a foundational concept in algorithms and data structures education, emphasizing pointer manipulation.

Fundamentals

Definition and Nomenclature

A linked list is a linear data structure consisting of a sequence of elements, known as nodes, where each node stores a value and a reference to the next node in the sequence, allowing the elements to be connected dynamically without requiring contiguous memory allocation. This structure relies on pointers or references—mechanisms in programming languages that store memory addresses to enable indirect access to data—as the fundamental building blocks for linking nodes. In standard nomenclature, a "node" is the basic unit of a linked list, typically comprising two fields: a data field to hold the actual value (such as an integer, string, or object) and a next field serving as a pointer to the subsequent node. The "head" refers to the pointer or reference to the first node in the list, providing the entry point for traversal; if the list is empty, the head points to null. The "tail" denotes the last node, whose next pointer is set to null, marking the end of the sequence and preventing further traversal. The null value acts as a sentinel, indicating the absence of a subsequent node and distinguishing non-empty lists from the empty case. A common way to represent a node in pseudocode, using a C-like structure, is as follows:
```c
struct Node {
    int data;           // Example data field (can be any type)
    struct Node* next;  // Pointer to the next node
};
```
This illustrates the essential components: the data payload and the linking reference, with the last node's next field assigned null to terminate the list.

Singly Linked List

A singly linked list is a type of linear data structure consisting of a sequence of nodes, where each node stores a data value and a single pointer (or reference) to the next node in the sequence. The list begins with a head pointer referencing the first node, and the final node contains a null reference to signify the end of the list. This unidirectional linking allows traversal only in the forward direction, from head to tail. In terms of memory layout, nodes in a singly linked list are typically allocated dynamically and stored in non-contiguous locations in memory, with pointers serving as the mechanism to connect them logically into a chain. This contrasts with contiguous structures like arrays, enabling efficient insertion and deletion without shifting elements, though it requires additional space for the pointers themselves. The primary advantages of a singly linked list include its simplicity in implementation, as only one pointer per node is needed, leading to lower memory overhead—typically half the pointer storage compared to structures requiring bidirectional links. This makes it suitable for scenarios where forward-only access suffices and memory efficiency is prioritized. However, a key disadvantage is the inability to traverse backward efficiently; accessing a preceding node requires restarting from the head and iterating forward, resulting in O(n) time complexity in the worst case for such operations. For illustration, consider pseudocode to create a singly linked list with three nodes containing values 1, 2, and 3:
class Node {
    int data;
    Node next;
}

Node createList() {
    Node head = new Node();
    head.data = 1;
    head.next = new Node();
    head.next.data = 2;
    head.next.next = new Node();
    head.next.next.data = 3;
    head.next.next.next = null;
    return head;
}
This example initializes the head node, links subsequent nodes, and sets the tail's next pointer to null, forming the chain head → 1 → 2 → 3 → null.

Doubly Linked List

A doubly linked list extends the singly linked list by incorporating bidirectional pointers in each node, enabling traversal in both forward and backward directions. Each node typically consists of three fields: the data value, a pointer to the next node in the sequence, and a pointer to the previous node. The head node's previous pointer is set to null, and the tail node's next pointer is set to null, maintaining the linear structure while allowing efficient access from either end. The following illustrates a basic node structure in C-like code:
```c
struct Node {
    int data;           // The data stored in the node
    struct Node* next;  // Pointer to the next node
    struct Node* prev;  // Pointer to the previous node
};
```
This contrasts with singly linked lists by adding the previous pointer, which facilitates operations that require backward navigation without restarting from the head. Doubly linked lists offer several benefits, including simpler deletion of a node when a direct pointer to it is available, as the previous and next nodes can be directly relinked in constant time. Reversal of the entire list can be performed in O(n) time by iterating through the list and swapping the next and prev pointers in each node. However, these advantages come at the cost of increased memory overhead, with each node requiring an extra pointer compared to a singly linked list, approximately doubling the space for pointers. In terms of space complexity, a doubly linked list storing n elements requires O(n) space overall, as each of the n nodes holds the data value plus two pointers (typically 8 bytes each on a 64-bit system, excluding data size). This makes it suitable for applications where bidirectional access justifies the additional storage, such as in browser history implementations or undo/redo mechanisms in editors.
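The constant-time unlink described above can be sketched in Python; the node class and helper names here are illustrative, not from any particular library:

```python
class DNode:
    """Doubly linked node: data plus next/prev pointers."""
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

def link(a, b):
    """Make b follow a."""
    a.next, b.prev = b, a

def unlink(node):
    """Remove node in O(1) by relinking its neighbours directly."""
    if node.prev is not None:
        node.prev.next = node.next
    if node.next is not None:
        node.next.prev = node.prev
    node.prev = node.next = None

# Build 1 <-> 2 <-> 3, then delete the middle node with no traversal.
a, b, c = DNode(1), DNode(2), DNode(3)
link(a, b)
link(b, c)
unlink(b)
assert a.next is c and c.prev is a
```

Because the deleted node carries its own prev pointer, no search for the predecessor is needed, which is exactly what singly linked lists lack.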

Variants

Circular Linked List

A circular linked list is a variant of the linked list data structure in which the last node connects back to the first node, creating a closed loop that enables continuous traversal without a defined end. This structure eliminates null terminators, with the tail node's next pointer referencing the head in a singly linked variant, or the head node's previous pointer referencing the tail in a doubly linked variant. Unlike linear linked lists, which terminate with a null pointer, circular lists form a cycle that simplifies certain operations involving repeated access to elements in sequence. There are two primary types of circular linked lists: singly circular, which allows unidirectional traversal along the loop via next pointers, and doubly circular, which supports bidirectional traversal using both next and previous pointers. In a singly circular list, each node contains data and a single next pointer, with the loop maintained by setting the tail's next to the head. For doubly circular lists, nodes include both next and previous pointers, ensuring the head's previous points to the tail and the tail's next points to the head, providing symmetry for forward and backward navigation. Circular linked lists are particularly efficient for representing cyclic data patterns, such as in round-robin scheduling algorithms within operating systems, where processes are managed in a repeating cycle to ensure fair CPU allocation. For instance, in input-queued switches, deficit round-robin scheduling employs circular linked lists to cycle through ports systematically, avoiding the need to reset pointers after reaching the end. Other applications include modeling computer networks, where nodes represent interconnected devices in a ring topology without a natural starting or ending point. Traversal in a circular linked list begins at any node, typically the head, and continues until returning to the starting point to visit all elements.
For a singly circular list, the process can use a do-while loop: initialize current to head, then do { process current's data; current = current.next; } while (current != head). This ensures all nodes are visited, including in single-node cases, and prevents infinite traversal by checking for the return to the start. As an example of maintaining the cycle during insertion in a singly circular list, consider adding a new node at the end. The following pseudocode illustrates the process, assuming an existing list with a head pointer:
if head is null:
    create new node
    new.next = new
    head = new
else:
    temp = head
    while temp.next != head:
        temp = temp.next
    create new node
    temp.next = new
    new.next = head
This ensures the new node integrates seamlessly into the cycle without breaking the connection to the head.
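The insertion pseudocode and the do-while-style traversal translate into runnable Python as follows (function names are illustrative):

```python
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = None

def insert_end(head, data):
    """Insert at the end of a singly circular list; returns the head."""
    new = CNode(data)
    if head is None:
        new.next = new            # a single node points to itself
        return new
    temp = head
    while temp.next is not head:  # walk to the last node
        temp = temp.next
    temp.next = new
    new.next = head               # close the cycle back to the head
    return head

def to_list(head):
    """Traverse exactly once around the cycle, collecting values."""
    if head is None:
        return []
    out, current = [], head
    while True:                   # do-while: process first, then test
        out.append(current.data)
        current = current.next
        if current is head:
            break
    return out

head = None
for v in (1, 2, 3):
    head = insert_end(head, v)
assert to_list(head) == [1, 2, 3]
```

Note that the empty and single-node cases fall out of the same logic: a one-element list is a node whose next pointer is itself.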

Multiply Linked List

A multiply linked list is a variation of the linked list where each node contains two or more link fields, enabling the same set of data records to be traversed in multiple different orders or along various dimensions. This extends the linear connectivity of simpler singly or doubly linked lists to support more complex relationships. In terms of structure, a common implementation is the two-dimensional multiply linked list used for representing sparse matrices, where non-zero elements are stored as nodes with pointers for row-wise and column-wise traversal. Each node typically holds the row index, column index, data value, a pointer to the next node in the same row (e.g., right), and a pointer to the next node in the same column (e.g., down). For instance, the following C-like code illustrates a basic node structure:
```c
struct Node {
    int row;
    int col;
    int data;
    struct Node* right;  // Next node in the same row
    struct Node* down;   // Next node in the same column
};
```
This setup allows efficient storage and access to sparse data without allocating space for zero elements. Applications of multiply linked lists include efficient representation of sparse matrices, where traversal along specific rows or columns can be performed in O(1) time per link follow, and graph structures requiring multi-directional connectivity. They have also been employed in database systems for indexing to facilitate multi-way access to records, such as reordering rows in column stores for compression and query optimization. The primary trade-offs involve increased memory usage, as each node requires additional space for multiple pointers—typically doubling or more compared to a singly linked list—and heightened implementation complexity in managing traversals and updates across links without introducing cycles or inconsistencies.
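As a partial sketch of this scheme in Python, the following builds only the row links (the column `down` links are omitted for brevity); the node and function names are illustrative:

```python
class MNode:
    """Sparse-matrix node with row/column link fields."""
    def __init__(self, row, col, data):
        self.row, self.col, self.data = row, col, data
        self.right = None  # next non-zero node in the same row
        self.down = None   # next non-zero node in the same column

def build_rows(entries):
    """entries: (row, col, value) triples, grouped by row with columns
    ascending. Returns a dict mapping row index -> first node in that row."""
    rows, tail = {}, {}
    for r, c, v in entries:
        node = MNode(r, c, v)
        if r not in rows:
            rows[r] = node        # first non-zero entry of this row
        else:
            tail[r].right = node  # append to the row chain
        tail[r] = node
    return rows

# Non-zero entries of a sparse matrix: only these consume memory.
rows = build_rows([(0, 2, 5), (1, 0, 7), (1, 3, 2)])
assert rows[1].data == 7 and rows[1].right.data == 2
```

Following a `right` pointer costs O(1) per step, so scanning a row touches only its non-zero entries rather than every column.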

Other Specialized Variants

List handles provide an abstract interface to the root of a linked list, allowing dynamic manipulation such as insertion or deletion without directly exposing the internal representation to the user. By encapsulating the head pointer and related state within a handle object, this variant supports safer operations in object-oriented systems and facilitates garbage collection or memory management. Hybrid structures combine linked lists with alternative techniques to optimize performance. Skip lists augment a singly linked list by adding multiple levels of forward pointers at exponentially increasing intervals, enabling faster search through probabilistic layer selection. A simple skip list node might be represented in pseudocode as:
class SkipListNode {
    int value;
    SkipListNode* next[MaxLevels];  // Array of pointers for different levels
    int level;                      // Highest level for this node
}
Unrolled linked lists group multiple elements into each node as a small contiguous array, followed by a pointer to the next such block, which improves cache locality and reduces the number of pointers needed compared to traditional singly linked lists. These specialized variants trade off increased structural complexity for gains in specific operations; for instance, skip lists achieve expected O(log n) search time versus O(n) for basic linked lists, but insertions require level computation and pointer updates across multiple layers, raising implementation complexity. Similarly, unrolled lists enhance traversal speed through better cache-access patterns at the expense of more involved insertion and deletion logic within fixed-size blocks.
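An unrolled list can be sketched in Python with fixed-capacity blocks; the block size and names below are illustrative assumptions, and splitting of interior blocks is omitted for brevity:

```python
class Block:
    """Unrolled-list node: a small array of elements plus one next pointer."""
    CAPACITY = 4
    def __init__(self):
        self.items = []   # contiguous elements stored in this block
        self.next = None  # pointer to the following block

def append(head, value):
    """Append to the last block, starting a new block when it is full."""
    if head is None:
        head = Block()
    last = head
    while last.next is not None:
        last = last.next
    if len(last.items) == Block.CAPACITY:
        last.next = Block()
        last = last.next
    last.items.append(value)
    return head

def iterate(head):
    """Yield all elements; within each block this is a contiguous scan."""
    while head is not None:
        yield from head.items
        head = head.next

head = None
for v in range(6):
    head = append(head, v)
assert list(iterate(head)) == [0, 1, 2, 3, 4, 5]
```

With capacity 4, six elements occupy two blocks, so the structure needs one next pointer per four elements rather than one per element.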

Structural Elements

Sentinel Nodes

Sentinel nodes, also known as dummy or placeholder nodes, are specially designated nodes in a linked list that do not store actual data but serve to mark boundaries and simplify structural management. These nodes are typically positioned at the front (header sentinel), rear (trailer sentinel), or both ends of the list, with their pointers configured to connect to the first or last real node, thereby providing a consistent structure regardless of the list's size. In this way, a sentinel acts as a traversal terminator or boundary marker, avoiding the need to handle null pointers directly in many cases. The primary types of sentinel nodes include front sentinels for singly linked lists, which precede the first data node, and rear sentinels for lists where operations at the end are frequent. In doubly linked lists, both header and trailer sentinels are commonly used, with the header's next pointer linking to the first node and its previous pointer set to null, while the trailer's previous pointer points to the last node and its next to null. This dual-sentinel approach ensures bidirectional navigation without special boundary checks. One key benefit of sentinel nodes is the elimination of null checks during operations, such as checking for empty lists or insertions at the beginning or end, which streamlines code implementation and reduces errors. For instance, an empty list can be represented simply by having the sentinel's pointers reference itself or the other sentinel, maintaining uniformity with non-empty lists. This design promotes cleaner, more robust algorithms by treating boundaries as regular nodes. In pseudocode, a singly linked list with a head sentinel might be initialized as follows:
class Node:
    data = None
    next = None

sentinel = Node()  # No data stored
sentinel.next = None  # Points to first real node or None for empty list

# To insert a new node at the front:
new_node = Node()
new_node.data = value
new_node.next = sentinel.next
sentinel.next = new_node
This structure illustrates how the sentinel simplifies linking without null checks at the head. Despite these advantages, sentinel nodes introduce a minor memory overhead, as each occupies space for pointers (and possibly other fields) without contributing to stored data, though this cost is typically negligible for most applications.

Empty Lists

In linked lists, an empty list is conventionally represented by setting the head pointer to null, signifying that no nodes have been allocated or linked together. This approach ensures that the list structure requires zero memory for nodes in the absence of elements, distinguishing it from non-empty states where the head points to the first node. Detection of an empty list is straightforward and involves a simple null check on the head pointer, which serves as the primary indicator for emptiness in standard implementations. A common utility function for this purpose is the isEmpty method, exemplified in the following pseudocode:
boolean isEmpty(Node head) {
    return head == null;
}
This check is efficient, operating in constant time, and is essential for initializing lists or verifying state before further operations. Operations on an empty linked list are designed to handle the null head gracefully to maintain robustness. For insertion, the process creates the first node and assigns it directly to the head pointer, effectively transitioning the list from empty to containing a single element without additional linking steps. In contrast, deletion attempts on an empty list typically return immediately or signal an error, as there are no nodes to remove. Traversal operations similarly terminate without iterating, avoiding any processing since no starting node exists. A key consideration in implementing empty list handling is preventing null pointer dereferences, which can lead to runtime errors; this requires explicit checks before dereferencing the head in traversal, search, or modification routines. As an alternative to null-based representation, sentinel nodes can simplify empty list management by providing a dummy node that eliminates special-case checks.

List Handles

In data structures, a list handle refers to an abstraction or object that encapsulates the head pointer (and optionally the tail pointer) of a linked list, serving as the primary identifier and access point for the structure without exposing the underlying node details. This allows users to manipulate the list through a controlled interface, preventing direct access to internal pointers that could lead to errors like dangling references or invalid modifications. The use of list handles promotes encapsulation, a key principle in object-oriented programming, by bundling the list's state and operations within a single entity, thereby hiding implementation specifics from client code. This approach simplifies memory management, as the handle manages dynamic allocation and deallocation of nodes, reducing the risk of manual pointer errors in languages without built-in garbage collection. For instance, in Java, a List class typically declares a private Node reference for the head, ensuring that all interactions occur via methods like add or remove, which maintain the list's integrity. Operations on the linked list are exclusively performed through methods provided by the handle, such as insertion at the head or tail, which internally update the encapsulated pointers while keeping the linking mechanism hidden from the user. This design not only enforces safe usage but also facilitates easier maintenance and extension of the implementation. In environments with automatic garbage collection, list handles play a crucial role by acting as roots during the collection process, enabling the collector to trace and reclaim unreferenced nodes linked from the head, thus preventing leaks in complex structures. This integration ensures that as long as the handle remains reachable, its associated nodes are preserved, and once the handle becomes unreachable, the entire chain of nodes can be efficiently collected.
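The handle pattern can be sketched in Python, where a wrapper class owns the head pointer and exposes only methods (the class and method names are illustrative, not a standard library API):

```python
class _Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class LinkedList:
    """Handle object: encapsulates the head; clients never touch nodes."""
    def __init__(self):
        self._head = None  # private by convention; the only GC root

    def push_front(self, data):
        """O(1) insertion at the head via the handle."""
        self._head = _Node(data, self._head)

    def pop_front(self):
        """O(1) removal at the head; the old node becomes unreachable."""
        if self._head is None:
            raise IndexError("pop from empty list")
        data = self._head.data
        self._head = self._head.next
        return data

    def __iter__(self):
        node = self._head
        while node is not None:
            yield node.data
            node = node.next

lst = LinkedList()
lst.push_front(2)
lst.push_front(1)
assert list(lst) == [1, 2]
assert lst.pop_front() == 1
```

Once `lst` itself becomes unreachable, the whole chain of `_Node` objects is reclaimable by the garbage collector, since the handle was their only root.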

Operations

Insertion and Deletion

Insertion operations in a singly linked list involve creating a new node and updating pointers to incorporate it into the structure, with efficiency varying by location. Inserting at the head is constant time, requiring only the creation of a new node whose next pointer points to the current head, followed by updating the head to the new node. This approach handles the empty list case seamlessly by setting the head directly to the new node if the list was previously empty. For insertion at the tail or a specific position, traversal from the head is necessary to reach the appropriate point, resulting in worst-case O(n) time where n is the number of nodes. To insert after a specific node p, the algorithm sets the new node's next pointer to p's current next, then updates p's next to the new node; if inserting at the tail, p is the last node found via traversal. In the empty list case, this defaults to head insertion. Pseudocode for insertion after p (assuming p is not null) is as follows:
create new_node with value
new_node.next = p.next
p.next = new_node
If the list is empty, create the new node and set head = new_node. Deletion in a singly linked list requires locating the node to remove and updating the previous node's next pointer to skip it, also incurring O(n) worst-case time due to the search for the target in deletions by value or position. For deletion by value or position, first traverse to find the previous node q pointing to the target x; then set q.next = x.next. Head deletion is a special case: if x is the head, set head = head.next; if the list becomes empty, set head to null. Pseudocode for deleting x (assuming the previous node q is known) is:
q.next = x.next
// Free x if necessary
For head deletion when head = x:
head = head.next
In the empty list case, no action is taken as there is no node to delete. These operations assume access to the previous node, which requires traversal unless inserting or deleting at the head. In doubly linked lists, deletion can be O(1) with direct access to the node via its previous pointer. Traversal of a singly linked list begins at the head and proceeds linearly by following each node's next pointer until reaching a null reference, allowing access to all elements in sequence. This process is fundamental for examining or aggregating list contents without modification. The standard iterative pseudocode for traversal is as follows:
current = head
while current ≠ null do
    process(current.data)
    current = current.next
end while
This ensures each node is visited exactly once, with a time complexity of O(n), where n is the number of nodes. Common applications of traversal include computing the list's length or identifying the maximum value among elements. For length calculation, a counter is incremented during the iteration:
length = 0
current = head
while current ≠ null do
    length = length + 1
    current = current.next
end while
return length
This operation requires examining every node, confirming the O(n) complexity. Similarly, to find the maximum value in a list of comparable elements (assuming the list is non-empty):
max_value = head.data
current = head.next
while current ≠ null do
    if current.data > max_value then
        max_value = current.data
    end if
    current = current.next
end while
return max_value
Both examples rely on the same linear scan, processing data at each step without additional space beyond a few variables. Search operations in linked lists locate a node containing a specific target value by performing a sequential scan from the head, comparing data fields until a match occurs or the end is reached. This yields a node pointer upon success or an indication of failure (e.g., null), with worst-case time complexity of O(n) due to potential full traversal. Pseudocode for value-based search is:
current = head
while current ≠ null do
    if current.data = target then
        return current
    end if
    current = current.next
end while
return null  // target not found
To return an index instead, a counter can be maintained and incremented alongside the traversal. This linear search is straightforward but inefficient for frequent queries compared to indexed structures. In circular linked lists, where the last node's next pointer references the head, standard traversal loops risk infinite iteration unless modified. A typical adjustment uses a do-while loop that processes nodes until the current pointer returns to the initial starting node, ensuring each element is visited once:
if head = null then return
start = head
current = head
repeat
    process(current.data)
    current = current.next
until current = start
For search in circular lists, the loop similarly terminates upon returning to the start if the target is absent, preventing endless cycling while maintaining O(n) complexity. While basic traversal and search are inherently linear, optimizations like hashing can accelerate lookups by indexing into linked list buckets, though such enhancements fall outside core list operations.
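The insertion, deletion, and search pseudocode above translates directly into runnable Python (the function names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_after(p, value):
    """O(1) insertion after an existing node p."""
    p.next = Node(value, p.next)

def delete_value(head, target):
    """Delete the first node holding target; returns the (possibly new) head."""
    if head is None:
        return None
    if head.data == target:          # head deletion special case
        return head.next
    q = head
    while q.next is not None and q.next.data != target:
        q = q.next                   # q will precede the target x
    if q.next is not None:
        q.next = q.next.next         # bypass x
    return head

def search(head, target):
    """Linear scan; returns the matching node or None."""
    current = head
    while current is not None:
        if current.data == target:
            return current
        current = current.next
    return None

def values(head):
    """Traversal helper collecting the data fields in order."""
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out

head = Node(1, Node(2, Node(3)))
insert_after(search(head, 2), 9)     # 1 -> 2 -> 9 -> 3
head = delete_value(head, 1)         # 2 -> 9 -> 3
assert values(head) == [2, 9, 3]
```

Note how `delete_value` must first locate the predecessor `q`, the O(n) step that a doubly linked list avoids.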

Additional Operations

Linked lists support various higher-level operations that manipulate the structure as a whole, such as reversal, concatenation, copying, and sorting. These operations are essential for applications requiring dynamic reconfiguration of list order or combination without excessive overhead. Reversal of a singly linked list rearranges the nodes so that the last node becomes the first, reversing the direction of all links. The standard iterative algorithm employs three pointers—previous (initialized to null), current (starting at the head), and next—to traverse the list in O(n) time while using O(1) extra space, by repeatedly storing the next node, redirecting the current node's next pointer to previous, and advancing the pointers. A recursive approach achieves the same time complexity by reversing the tail recursively and then attaching the original head as the new tail, though it uses O(n) stack space due to the recursion depth. Here is pseudocode for the iterative reversal:
function reverseList(head):
    prev = null
    current = head
    while current != null:
        nextTemp = current.next
        current.next = prev
        prev = current
        current = nextTemp
    return prev
This method swaps pointers by temporarily storing the next reference before updating, ensuring no links are lost during traversal. Concatenation combines two singly linked lists by linking the tail of the first to the head of the second, an O(1) operation if the tail of the first list is already known; otherwise, locating the tail requires an initial O(m) traversal where m is the length of the first list. Copying a linked list produces a deep copy by traversing the original list once, creating new nodes with identical data values, and linking them in the same order, incurring O(n) time and space complexity for a list of n nodes. Sorting a linked list can be efficiently performed using an adaptation of merge sort, which suits linked structures due to easy splitting and merging without index shifts. The algorithm operates in O(n log n) time by recursively dividing the list at its midpoint (found via a two-pointer slow-fast traversal), sorting each half independently, and then merging the sorted halves by iteratively comparing and linking the smallest node from each. This bottom-up or top-down process ensures stability and handles the lack of random access inherent in linked lists.
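The merge-sort scheme described above, including the slow/fast midpoint split, can be written as a short Python sketch (helper names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def merge_sort(head):
    """Sort a singly linked list in O(n log n); returns the new head."""
    if head is None or head.next is None:
        return head
    # Find the midpoint with slow/fast pointers, then cut the list in two.
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    mid, slow.next = slow.next, None
    left, right = merge_sort(head), merge_sort(mid)
    # Merge the two sorted halves by relinking nodes (stable: <= keeps order).
    dummy = tail = Node(None)
    while left is not None and right is not None:
        if left.data <= right.data:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left if left is not None else right
    return dummy.next

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out

head = merge_sort(from_list([3, 1, 4, 1, 5, 9, 2, 6]))
assert to_list(head) == [1, 1, 2, 3, 4, 5, 6, 9]
```

The merge step relinks existing nodes rather than copying data, which is why merge sort suits linked lists better than algorithms relying on random access.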

Trade-offs

Linked Lists vs. Dynamic Arrays

Linked lists and dynamic arrays (also known as resizable arrays or vectors) represent two fundamental approaches to implementing linear data structures, each with distinct trade-offs in space and time efficiency. Linked lists consist of nodes where each node stores data and a reference to the next node, enabling dynamic growth without preallocation. In contrast, dynamic arrays maintain elements in a contiguous block of memory that can be resized as needed, typically by allocating a larger block and copying elements when capacity is exceeded. These differences lead to varying performance characteristics depending on the operations performed.

Space Efficiency

Linked lists require additional space for pointers or references in each node, resulting in an O(n) overhead proportional to the number of elements, where each pointer typically consumes 4 or 8 bytes depending on the system architecture. This overhead can be significant for small elements, as each node is allocated separately on the heap. Dynamic arrays, however, store elements contiguously without per-element pointers, achieving better space locality and minimal overhead per element, though they may temporarily waste space equal to up to half the current capacity during resizing phases. As a rule, linked lists are more space-efficient for lists with highly variable sizes where exact allocation is preferred, while dynamic arrays suit scenarios with predictable or stable sizes to minimize wasted space from over-allocation during resizing. For instance, if elements are large (e.g., structs with substantial fields), the pointer overhead in linked lists becomes negligible relative to the data size.

Time Complexity

The core time trade-offs arise from access patterns and modifications. Access to an element by index is O(1) in dynamic arrays due to direct address calculation from the base address, but O(n) in linked lists, requiring traversal from the head. Insertion and deletion at known positions (e.g., head or tail, or given a direct node pointer) are O(1) in linked lists, as they involve only pointer adjustments without shifting elements. In dynamic arrays, insertions or deletions in the middle or beginning require O(n) time to shift subsequent elements, though appends to the end are amortized O(1) due to occasional resizing. The following table summarizes average-case time complexities for common list operations, assuming a singly linked list without a tail pointer and a dynamic array with a doubling resize strategy:
| Operation | Dynamic Array | Linked List |
|---|---|---|
| Access by index | O(1) | O(n) |
| Insert at beginning | O(n) | O(1) |
| Insert at end | Amortized O(1) | O(n) (O(1) with tail pointer) |
| Insert at arbitrary position | O(n) | O(1) (if position known) |
| Delete at beginning | O(n) | O(1) |
| Delete at end | Amortized O(1) | O(n) (O(1) with tail pointer) |
| Delete at arbitrary position | O(n) | O(1) (if position known) |
| Search (unsorted) | O(n) | O(n) |
These complexities highlight that dynamic arrays favor cache-friendly random access and sequential operations, while linked lists excel in frequent structural changes without relocation.

When to Choose Each Structure

Dynamic arrays are ideal for applications requiring fast lookups or iterations over the entire structure, such as in many standard library implementations (e.g., Java's ArrayList or C++'s std::vector), where O(1) access dominates. Linked lists are better suited for scenarios with frequent insertions and deletions at non-end positions, avoiding the shifting costs of arrays, particularly when the list size is unknown or fluctuates widely. A brief note on internal variants: singly linked lists suffice for forward-only operations, while doubly linked lists add space for backward pointers but enable O(1) end deletions with a tail pointer, as the previous link allows direct updates without traversal.

Example: Queue Implementation

Consider implementing a queue, where enqueue adds to the rear and dequeue removes from the front. A linked list achieves O(1) time for both operations by maintaining head and tail pointers, with no element shifting required. In a dynamic array, enqueue is amortized O(1) via resizing, but dequeue necessitates O(n) shifting of all elements to fill the front gap, making it inefficient for large queues unless using a circular buffer variant. This demonstrates linked lists' advantage in preserving operation efficiency for dynamic workloads like buffering or task scheduling.
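A minimal Python sketch of the linked-list queue described above, with head and tail pointers giving O(1) enqueue and dequeue (class names are illustrative):

```python
class _Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class Queue:
    """FIFO queue: O(1) enqueue at the tail, O(1) dequeue at the head."""
    def __init__(self):
        self._head = None  # front: dequeue end
        self._tail = None  # rear: enqueue end

    def enqueue(self, data):
        node = _Node(data)
        if self._tail is None:        # empty queue
            self._head = self._tail = node
        else:
            self._tail.next = node
            self._tail = node

    def dequeue(self):
        if self._head is None:
            raise IndexError("dequeue from empty queue")
        data = self._head.data
        self._head = self._head.next
        if self._head is None:        # queue became empty
            self._tail = None
        return data

q = Queue()
for v in (1, 2, 3):
    q.enqueue(v)
assert [q.dequeue(), q.dequeue(), q.dequeue()] == [1, 2, 3]
```

Neither operation shifts or copies elements, which is the advantage over the array-backed queue without a circular buffer.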

Amortized Analysis

Dynamic arrays' resizing incurs occasional O(n) costs when capacity is exceeded, but by doubling the size each time (e.g., from 2^k to 2^{k+1}), the total cost over m insertions is O(m), yielding amortized O(1) per operation via the accounting method—charging extra "credits" during cheap inserts to cover resizes. Linked lists avoid such bursts, with truly constant O(1) time per insertion or deletion at endpoints, though traversal remains linear without indexing support. This amortized efficiency makes dynamic arrays preferable for append-heavy workloads, as verified in standard analyses.
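The aggregate bound can be checked empirically with a short simulation of the doubling strategy (an illustrative sketch, counting only element copies):

```python
def simulate_appends(m):
    """Count element copies incurred by m appends with capacity doubling."""
    capacity, size, copies = 1, 0, 0
    for _ in range(m):
        if size == capacity:   # array full: allocate double, copy everything
            copies += size
            capacity *= 2
        size += 1
    return copies

m = 1000
copies = simulate_appends(m)
# Copy work is 1 + 2 + 4 + ... < 2m, so the amortized cost per append is O(1).
assert copies < 2 * m
```

The geometric series of resize costs is what keeps the total linear; a fixed-increment growth policy instead yields O(m^2) total copy work.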

Singly vs. Doubly Linked

A singly linked list consists of nodes where each node contains data and a single pointer to the next node in the sequence, enabling unidirectional traversal from the head to the tail. In contrast, a doubly linked list includes an additional pointer in each node to the previous node, allowing bidirectional traversal and more flexible navigation. This structural difference impacts both memory usage and operation complexity, with the choice between them depending on the application's requirements for directionality and performance. Doubly linked lists require approximately twice the pointer storage per node compared to singly linked lists, as each node stores both next and previous pointers, leading to higher memory overhead—typically an extra pointer's worth of space (e.g., 8 bytes on a 64-bit system) per node. This extra space enables direct access to adjacent nodes in both directions without additional searches, which is beneficial when frequent backward operations are needed, but it can be prohibitive in memory-constrained environments where singly linked lists suffice for forward-only access. In terms of time complexity, both structures support O(1) insertion and deletion at the head if a head pointer is maintained. However, operations at the tail or in the middle differ significantly: singly linked lists require O(n) time for tail insertions or deletions due to the need to traverse from the head to locate the last node (unless a tail pointer is added, which complicates updates), while doubly linked lists achieve O(1) for these with a tail pointer and direct previous/next access. Traversal and search remain O(n) for both in the worst case, but doubly linked lists allow reverse traversal in O(n) without restarting from the head. The following table summarizes key operation complexities, assuming access to the head (and tail where applicable) and no direct pointer for middle operations:
| Operation | Singly Linked List | Doubly Linked List |
| --- | --- | --- |
| Insert at head | O(1) | O(1) |
| Insert at tail | O(n) | O(1) |
| Delete at head | O(1) | O(1) |
| Delete at tail | O(n) | O(1) |
| Insert/delete in middle (with node pointer) | O(1) for insert; O(n) for delete (needs previous node) | O(1) |
| Traversal (full) | O(n) (forward only) | O(n) (bidirectional) |
| Search | O(n) | O(n) |
These complexities follow from standard implementations, where middle deletions in singly linked lists necessitate finding the previous node via linear search. Singly linked lists are ideal for applications requiring only forward traversal, such as implementing stacks where push and pop operations occur at one end, minimizing memory use and simplifying node structure. Doubly linked lists excel in scenarios needing bidirectional navigation, like web browser history, where users frequently move back and forth between pages, allowing O(1) updates to previous and next links without rescanning the entire history. For instance, in browser implementations, each history entry acts as a node, with forward and backward buttons leveraging the previous and next pointers directly. When bidirectional traversal is unnecessary, a singly linked list provides a sufficient and more memory-efficient alternative, avoiding the overhead of unused previous pointers in purely sequential applications. These trade-offs mean doubly linked lists should be selected only when the gains in reverse operations justify the increased memory usage and implementation complexity.
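The O(1) middle deletion of the doubly linked variant comes from each node carrying its own previous pointer, so no rescan from the head is needed. A minimal C sketch (the names DNode, DList, dll_push_back, and dll_unlink are illustrative, not from any standard library):

```c
#include <stdlib.h>

/* Doubly linked node: the extra prev pointer is the memory cost
   that buys O(1) unlinking given only a pointer to the node. */
typedef struct DNode {
    int data;
    struct DNode *prev, *next;
} DNode;

typedef struct {
    DNode *head, *tail;
} DList;

/* Append in O(1) using the maintained tail pointer. */
DNode *dll_push_back(DList *l, int v) {
    DNode *n = malloc(sizeof *n);
    n->data = v;
    n->next = NULL;
    n->prev = l->tail;
    if (l->tail) l->tail->next = n; else l->head = n;
    l->tail = n;
    return n;
}

/* O(1) removal given a pointer to the node itself: prev and next
   are patched directly, with head/tail updated at the boundaries. */
void dll_unlink(DList *l, DNode *n) {
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    free(n);
}
```

In a singly linked list the same unlink would first need an O(n) scan to find the predecessor.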

Linear vs. Circular Linking

In a linear linked list, nodes form a chain where the final node's next pointer is set to null, clearly demarcating the end of the sequence. Conversely, a circular linked list connects the last node's next pointer back to the first node (the head), creating a continuous loop without a terminating null value. This topological difference fundamentally alters how the structures are traversed, modified, and applied, with circular variants suiting cyclic processes while linear ones handle straightforward sequences.

Traversal in linear linked lists involves iterating through next pointers until reaching the null terminator, which provides a natural stopping condition but requires a null check at each step. Circular linked lists eliminate this check by design, enabling fluid, continuous traversal from any node, but programmers must store a reference to the starting node to halt after one full cycle; failing to do so, especially if the loop is corrupted, can lead to infinite iteration. The major advantage of circular traversal lies in reducing special-case handling for end-of-list conditions during operations like queue management.

Insertion and deletion in linear linked lists are relatively simple at the ends: adding a node at the tail involves setting its next pointer to null, and removal updates the previous node's pointer accordingly. In circular linked lists, these operations demand careful maintenance of the loop: inserting at the end requires linking the new node's next pointer to the head and updating the prior last node's next pointer, while deletion involves bypassing the removed node and reconnecting the prior node to the subsequent one, often requiring traversal to locate positions while ensuring the last-to-first link remains intact. This added complexity can increase implementation errors but supports seamless cyclic updates.
Memory overhead is essentially identical for singly linked implementations of both, as each node allocates space for data and one next pointer; the linear form stores null in the tail's next field, while the circular form stores a pointer to the head, yielding no net difference in per-node storage. Circular linking can be implemented atop either singly or doubly linked bases, though it amplifies the pointer updates in doubly linked cases. Linear linked lists suit most general-purpose sequences, such as task queues or file records, where a defined beginning and end suffice. Circular linked lists excel in scenarios requiring perpetual cycling, including the Josephus problem—where people in a circle are sequentially eliminated, solvable efficiently by simulating eliminations around the loop—and music playlists that repeat indefinitely without resetting to a head. To detect unintended loop closure or cycles in a list presumed linear, Floyd's cycle detection algorithm employs two pointers: a "tortoise" advancing one step at a time and a "hare" advancing two, meeting if a cycle exists, with O(n) time and O(1) space.
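Floyd's tortoise-and-hare check fits in a few lines of C; the Node and has_cycle names here are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

/* Singly linked node for the cycle check. */
typedef struct Node {
    int data;
    struct Node *next;
} Node;

/* Floyd's cycle detection: the "hare" advances two steps per iteration,
   the "tortoise" one. If the chain loops, the two must eventually meet
   inside the loop; if it is truly linear, the hare reaches NULL first.
   O(n) time, O(1) extra space. */
bool has_cycle(const Node *head) {
    const Node *slow = head, *fast = head;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return true;
    }
    return false;
}
```

The same routine distinguishes a linear list (returns false) from one whose tail was accidentally linked back into the chain (returns true).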

Impact of Sentinel Nodes

Sentinel nodes significantly simplify the implementation of linked list operations by reducing the number of conditional statements required to handle edge cases, such as insertions or deletions at the boundaries. Without sentinels, code for operations like inserting at the head often requires explicit checks for an empty list (e.g., if (head == null)), leading to duplicated logic and increased complexity. By placing a dummy sentinel node at the beginning (and optionally at the end), these checks become unnecessary, allowing uniform treatment of all positions. This approach streamlines algorithms, making them more readable and less prone to errors in boundary conditions. In doubly linked lists, sentinel nodes establish clear boundaries, with a header sentinel pointing to the first real node and a trailer sentinel linked from the last, which aids in bidirectional traversals and simplifies removal operations by always providing valid previous and next pointers. For circular linked lists, sentinels act as fixed start and end references, eliminating the need for separate head and tail pointers while preventing infinite loops during traversal; the last node links back to the sentinel, maintaining the loop structure without special null-handling. This uniformity extends to variants, where sentinels ensure consistent behavior across list types. The primary overhead of sentinel nodes is the allocation of 1-2 extra nodes per list, each containing pointers but no data, which adds a constant memory cost that is negligible for large lists but can accumulate in scenarios involving numerous short lists, such as one per document word. Initialization requires upfront allocation and linking of these nodes, introducing a minor setup cost, and in memory-constrained embedded systems, this fixed overhead may be undesirable compared to null-terminated lists. Despite these drawbacks, the code simplification often outweighs the costs in non-critical memory applications. 
To illustrate the impact on code uniformity, consider a pseudocode example for inserting a node at the beginning of a singly linked list. Without a sentinel:
function insertAtHead(value):
    newNode = new Node(value)
    if head == null:
        head = newNode
    else:
        newNode.next = head
        head = newNode
With a sentinel (where sentinel.next initially points to null):
function insertAtHead(value):
    newNode = new Node(value)
    newNode.next = sentinel.next
    sentinel.next = newNode
The sentinel version eliminates the conditional branch, applying the same logic regardless of list emptiness.

Implementations

Language Support

In the C programming language, linked lists lack built-in support and must be implemented manually using pointers to manage nodes dynamically, allowing for flexible memory allocation but requiring explicit handling of memory deallocation to avoid leaks. In contrast, C++ provides the std::list container in its Standard Template Library (STL), introduced in the C++98 standard, which implements a doubly-linked list for efficient insertions and deletions at arbitrary positions while supporting bidirectional iteration. Java offers native support through the java.util.LinkedList class, part of the Java Collections Framework since Java 1.2, which serves as a doubly-linked list realizing both the List and Deque interfaces, enabling all optional list operations including null elements and providing O(1) access for appends and removes at both ends. Python's standard library includes collections.deque, introduced in Python 2.4, which functions as a double-ended queue optimized for appends and pops from either end with O(1) performance; its underlying implementation consists of a doubly-linked list of fixed-size blocks to balance memory efficiency and speed. Other languages integrate linked list concepts more natively; for instance, Lisp dialects treat cons cells as the fundamental building blocks for lists, forming singly-linked structures where each cons cell pairs a value with a pointer to the next cell, enabling seamless list manipulation as a core language feature. Similarly, Go's container/list package, available since Go 1.0, supplies a doubly-linked list type with methods for pushing, popping, and iterating elements, designed for general-purpose use and safe in concurrent contexts only when properly synchronized.
In low-level languages such as C, the absence of built-in abstractions necessitates explicit node management via pointers, emphasizing portability across systems but increasing the risk of errors like dangling references, whereas higher-level languages abstract these details through standard libraries for broader interoperability.

Internal and External Storage

In internal storage, linked lists are implemented entirely within main memory (RAM), where each node resides as a contiguous structure containing data and pointers to adjacent nodes using direct memory addresses. This configuration enables rapid traversal and manipulation, with operations like insertion or deletion achieving O(1) time for adjacent elements due to immediate pointer dereferencing without intermediate loading delays. However, such lists are inherently volatile; all data is lost upon program termination, power loss, or system crash, necessitating periodic writes to persistent media for durability.

External storage adapts linked lists for persistence by mapping nodes to fixed-size disk blocks, where traditional pointers are replaced by file offsets or block identifiers that reference the physical location of the subsequent node on secondary storage. This method, exemplified in file system linked allocation, allows files to span non-contiguous disk sectors, storing both payload data and the offset to the next block within each block to form a chain. Commonly employed in database engines and file systems for managing large, durable datasets, it avoids external fragmentation by dynamically linking available blocks, though it incurs overhead from the space allocated to offsets—typically 4 to 8 bytes per block depending on the addressing scheme.

A primary challenge in external linked lists is the elevated latency of disk I/O; traversing the structure demands a separate disk read for each node, resulting in O(n/B) I/O complexity in the external memory model, where n is the list length and B is the disk block size (often 4-64 KB). This contrasts sharply with in-memory O(1) per-step access, amplifying costs for linear scans or searches, especially on rotating media where seek times can exceed 10 ms per operation. Reliability issues, such as pointer corruption from partial writes, further complicate recovery, often mitigated by journaling or redundancy.
Databases approximate external linking in structures like B-trees, where internal nodes serve as linking hubs stored in disk pages, with child pointers as page offsets enabling balanced, logarithmic-depth navigation across terabyte-scale persistent storage. In many database engines, B-tree leaves hold data records linked via offsets, supporting efficient indexing while minimizing I/O through page-level caching. Synchronization in external linked list configurations, vital for multi-user database environments, relies on concurrency control protocols to manage shared disk access and prevent anomalies like lost updates or dirty reads. Techniques such as multi-version concurrency control (MVCC) maintain historical node versions on disk, allowing readers to access consistent snapshots without blocking writers, while locking serializes modifications to offset chains. These methods ensure atomicity across I/O-bound operations, though they introduce versioning overhead—up to 2x storage amplification in high-contention scenarios.
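The block-and-offset scheme can be sketched as a C structure in which a block number stands in for the in-memory pointer. Everything here is an illustrative assumption—the 4 KB block size, the NEXT_NONE end marker, and the names do not correspond to any real file system's on-disk format:

```c
#include <stdint.h>

/* Linked allocation sketch: each fixed-size "disk block" stores payload
   bytes plus the block number of its successor, forming a chain. */
#define BLOCK_SIZE 4096
#define NEXT_NONE  UINT32_MAX            /* end-of-chain marker */

typedef struct {
    uint32_t next;                       /* block number of successor */
    uint8_t  payload[BLOCK_SIZE - sizeof(uint32_t)];
} DiskBlock;

/* Follow a chain through an in-memory array standing in for the disk.
   In a real system each hop would cost one disk read, giving the
   O(n/B) I/O behavior described above. */
uint32_t chain_length(const DiskBlock *disk, uint32_t first) {
    uint32_t count = 0;
    for (uint32_t b = first; b != NEXT_NONE; b = disk[b].next)
        count++;
    return count;
}
```

Because successors are identified by block number rather than address, the chain survives a restart: the offsets stored on disk remain valid, whereas raw memory pointers would not.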

Array-Based Representations

Array-based representations simulate linked lists by using a fixed-size array to store nodes, where each node consists of the data and an integer index pointing to the next node instead of a traditional pointer. This approach replaces memory addresses with array indices, typically using -1 to represent a null link. The primary advantages include improved cache locality due to contiguous allocation, which speeds traversal compared to scattered pointer-based nodes, and simplified memory management, since the entire structure is allocated at once without dynamic pointer handling. This representation also mimics the behavior of dynamic arrays while avoiding pointer arithmetic, making it suitable for environments with limited resources.

In implementation, the array is often structured as a collection of records, with the head of the list indicated by the index of the first occupied slot, and an auxiliary "available" list to track free slots for efficient insertion and deletion without immediate shifting. For insertion, a new node is allocated from the free list by updating indices, while deletion links the previous node to the next and returns the freed slot to the available list. Resizing, if needed, involves copying to a larger array, which can leave the old storage fragmented.

Drawbacks encompass the fixed maximum size of the array, limiting the list's capacity without reallocation, and potential inefficiency when frequent resizing is required or the estimated size is inaccurate, leading to wasted space or overflow. Unlike true linked lists, random access to slots is possible, but traversal remains linear, and maintaining the free list adds minor overhead. The following C code illustrates a basic array-based singly linked list with a fixed size of 100 nodes:
struct Node {
    int data;
    int next;  // index of next node or -1
};

Node array[100];
int head = -1;  // empty list
int avail = 0;  // first free slot

// Initialize free list
for (int i = 0; i < 99; i++) {
    array[i].next = i + 1;
}
array[99].next = -1;

// Insert at head (simplified)
void insert(int value) {
    if (avail == -1) return;  // full
    int newIndex = avail;
    avail = array[avail].next;
    array[newIndex].data = value;
    array[newIndex].next = head;
    head = newIndex;
}
This example uses indices for linking, with the available list enabling O(1) insertions without shifting. Such representations are particularly useful in embedded systems where pointers may be unavailable or undesirable due to memory constraints and the need for predictable allocation, providing linked-list semantics without dynamic memory overhead.
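Traversal of such an index-linked list follows next indices rather than pointers until the -1 terminator. This self-contained sketch mirrors the declarations above; the traverse helper is a hypothetical addition:

```c
/* Index-linked node, as in the example above. */
struct Node {
    int data;
    int next;  /* index of next node, or -1 for end of list */
};

/* Walk the chain from head, copying values in list order into out[].
   Returns the number of elements visited. */
int traverse(const struct Node *array, int head, int *out) {
    int count = 0;
    for (int i = head; i != -1; i = array[i].next)
        out[count++] = array[i].data;
    return count;
}
```

Note that although any slot can be reached in O(1) by index, reaching the k-th *list* element still requires following k links, just as with pointer-based nodes.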

Applications and Extensions

Common Applications

Linked lists are widely used to implement fundamental abstract data types such as stacks, queues, and deques, leveraging their dynamic nature for efficient insertions and deletions without fixed size constraints. Stacks, which follow last-in-first-out (LIFO) semantics, can be realized with singly linked lists by treating the head as the top, enabling constant-time push and pop operations. Queues, adhering to first-in-first-out (FIFO) principles, utilize singly linked lists with operations at the head for dequeue and tail for enqueue, also achieving O(1) time. Deques, supporting insertions and removals at both ends, are effectively implemented using doubly linked lists, which provide bidirectional access for balanced performance across all endpoints. In interactive applications, doubly linked lists facilitate navigation features like browser history, where each node represents a visited page, allowing efficient traversal backward and forward without rebuilding the structure. This setup enables O(1) transitions between consecutive pages, supporting user actions such as the back and forward buttons in web browsers. System-level applications often employ circular linked lists for cyclic processes, such as task scheduling in operating systems, where processes are organized in a circular queue to ensure fair round-robin allocation without a definitive end. Similarly, music playlists benefit from circular linking, enabling seamless repetition of tracks by linking the last track back to the first, which simplifies continuous playback loops. In algorithmic contexts, linked lists support operations like polynomial arithmetic by representing sparse polynomials as chains of nodes, each storing a coefficient and exponent, which facilitates efficient addition and multiplication by aligning terms during traversal. Undo mechanisms in software, such as text editors, leverage linked list-based stacks to store action histories, allowing reversal of operations in LIFO order for features like multi-level undos.
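As a concrete instance of the stack case, a minimal C sketch that treats the head as the top, so both push and pop are O(1) (SNode, push, and pop are illustrative names, not a library API):

```c
#include <stdlib.h>

/* Singly linked stack node: the head of the list is the stack top. */
typedef struct SNode {
    int data;
    struct SNode *next;
} SNode;

/* Push: the new node simply becomes the head. */
void push(SNode **top, int v) {
    SNode *n = malloc(sizeof *n);
    n->data = v;
    n->next = *top;
    *top = n;
}

/* Pop: detach and free the head, returning its value.
   The caller must ensure the stack is non-empty. */
int pop(SNode **top) {
    SNode *n = *top;
    int v = n->data;
    *top = n->next;
    free(n);
    return v;
}
```

An empty stack is just a NULL head pointer; no capacity needs to be chosen in advance, which is exactly the dynamic-sizing advantage described above.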
Contemporary applications include blockchain technology, where blocks form a singly linked structure via cryptographic hashes pointing to predecessors, ensuring tamper-evident immutability and chronological integrity in distributed ledgers. Skip lists extend the linked list structure by organizing nodes into multiple probabilistic layers, enabling expected O(log n) time for search, insertion, and deletion operations, serving as an alternative to balanced binary search trees without the need for rotational balancing. Introduced by William Pugh in 1990, skip lists layer a standard linked list with higher-level "express" lists where each node is promoted to the next layer with probability p (typically 1/2), allowing traversals to skip over multiple nodes efficiently. Self-adjusting lists, also known as self-organizing lists, reorder a linear linked list dynamically based on access patterns to improve average search times through amortized analysis. Common heuristics include the move-to-front rule, which relocates the accessed node to the list head after each search, and the transpose rule, which swaps the accessed node with its predecessor; these can achieve O(1) amortized time for frequently accessed elements under certain access distributions. Sleator and Tarjan analyzed these in 1985, showing competitive ratios against optimal static lists, with move-to-front performing within a factor of 2 of the best possible organization for independent requests. Linked lists relate to other structures through their linear chaining: binary search trees build on similar node-pointer mechanisms but add branching for logarithmic search in ordered data, in contrast to the O(n) worst-case traversal of plain lists. Graphs generalize linked lists by permitting arbitrary connections beyond sequential links, enabling representation of arbitrary relationships, with linked lists serving as paths or adjacency lists.
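The move-to-front rule amounts to one splice after each successful search. A sketch in C under the same singly linked structure used throughout (mtf_search and MNode are illustrative names):

```c
#include <stdbool.h>
#include <stddef.h>

/* Singly linked node for a self-organizing list. */
typedef struct MNode {
    int key;
    struct MNode *next;
} MNode;

/* Search with the move-to-front heuristic: on a hit, the found node is
   spliced out and relinked as the new head, so frequently requested
   keys drift toward the front of the list. Returns true on a hit. */
bool mtf_search(MNode **head, int key) {
    MNode *prev = NULL, *cur = *head;
    while (cur && cur->key != key) {
        prev = cur;
        cur = cur->next;
    }
    if (!cur) return false;        /* key not present; list unchanged */
    if (prev) {                    /* not already at front: splice out */
        prev->next = cur->next;
        cur->next = *head;         /* relink at the head */
        *head = cur;
    }
    return true;
}
```

Under skewed access distributions this keeps hot keys near the head, which is the source of the amortized bounds analyzed by Sleator and Tarjan.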
Array-based lists simulate linked behavior via index pointers but lack true dynamic linking, often used for fixed-size approximations. Hybrid structures like linked hash maps combine linked lists with hash tables to maintain insertion order or access order while providing average O(1) lookups, as implemented in Java's LinkedHashMap, where entries form a doubly-linked list chained within hash buckets. Rope data structures represent large strings as binary trees of smaller string leaves, akin to linked concatenations, enabling efficient immutable operations like splitting and joining while avoiding the O(n) costs of traditional string copies in text editors or compilers. Boehm et al. described ropes in 1995 as supporting O(log n) concatenations and insertions for massive texts, with leaf nodes holding fixed substrings linked through internal balancing.

Historical Development

Origins

In the mid-1950s, the roots of linked lists emerged in assembly language programming for dynamic memory allocation, particularly through the Information Processing Language (IPL) developed by Allen Newell, J. C. Shaw, and Herbert A. Simon. Introduced in 1956 for the Logic Theorist program at RAND Corporation and Carnegie Mellon University, IPL enabled list structures that supported symbolic manipulation and recursion, allowing data to be linked via pointers in a way that accommodated variable-sized collections without predefined bounds. An earlier practical use appeared in 1953, when Hans Peter Luhn implemented chaining-based hash tables using linked lists on the IBM 701. This innovation was driven by the constraints of early computers, such as the IBM 704, which had limited memory (typically 4,096 to 32,768 words) and relied on fixed-size arrays that proved inadequate for complex, unpredictable data in artificial intelligence tasks. Early pointer concepts further advanced in 1957 with proposals for list processing in FORTRAN, where John McCarthy suggested extensions to handle symbolic expressions through linked representations, implemented as a library on the IBM 704. These efforts addressed the inflexibility of static arrays by enabling dynamic linking of data elements. The theoretical basis solidified in 1958 with McCarthy's conceptualization of LISP (List Processing), which formalized linked lists as recursive structures using "cons cells"—binary pairs where each cell links a value to another cell, forming chains that supported garbage collection and efficient manipulation of symbolic expressions. This approach, motivated by the same hardware limitations, established linked lists as a cornerstone for handling non-numeric, expandable datasets in early computing.

Evolution and Key Milestones

In the 1960s and 1970s, linked lists gained widespread adoption in operating systems for managing dynamic structures such as process queues and file systems. For instance, early versions of Unix, developed at Bell Labs starting in 1969, utilized linked lists to organize sleeping processes and other kernel data, enabling efficient insertion and removal in multitasking environments. This period also saw formal mathematical analysis of linked lists, with Donald Knuth's The Art of Computer Programming (1968) providing rigorous examinations of their complexity for operations like traversal and splicing, establishing a foundational framework for their theoretical study. A key milestone was the publication of sentinel node techniques in algorithms texts during the 1970s, which simplified boundary checks in linked list operations by adding dummy nodes to avoid special cases for empty lists or ends, as detailed in Aho, Hopcroft, and Ullman's The Design and Analysis of Computer Algorithms (1974). The 1980s marked precursors to standardized implementations of linked lists in modern languages, with growing emphasis on doubly linked variants for bidirectional traversal. Researchers began exploring generic programming concepts, laying groundwork for reusable structures that influenced later standards, including container designs supporting efficient deque operations. By the early 1990s, this culminated in the Standard Template Library (STL) for C++, proposed in 1994 and integrated into the ISO C++ standard, where the std::list container provided a doubly linked list implementation optimized for frequent insertions and deletions. In the 1990s and 2000s, linked lists were further standardized through object-oriented frameworks.
The Java Collections Framework, introduced in JDK 1.2 in 1998, included the LinkedList class as a doubly linked implementation of the List and Deque interfaces, enabling seamless integration in enterprise applications with constant-time access at both ends. This era also emphasized performance, with implementations tuning node layout to reduce overhead in memory-sensitive environments. More recently, linked lists have been integrated into distributed systems. Apache Spark, released in 2014, uses Java collections including linked lists for certain operations in memory-constrained environments. A notable application emerged in blockchain technology, where Satoshi Nakamoto's 2008 Bitcoin whitepaper described the blockchain as a chain of blocks timestamped and linked via hashes—essentially a singly linked list ensuring immutable sequencing of transactions without central authority. These developments highlight linked lists' enduring role in scalable, tamper-evident structures.

    Linked list: easy! ‣ Just create a new node and update two pointers. • ArrayList: more costly. ‣ All elements ...
  8. [8]
    LinkedLists
    ### Summary of Linked Lists Content
  9. [9]
    Linked Lists - cs.wisc.edu
    We define a linked list: a sequence of elements containing data arranged one after the other, such that each element is connected by a link.
  10. [10]
    Singly-Linked Lists
    A linked list is a data structure where one object refers to the next one in a sequence by storing its address. Each object is referred to as a node.
  11. [11]
    Data Structures and Algorithms -- Class Notes, Section 1 - UF CISE
    A singly-linked list (SLL) is a list where each element has the following fields: Value: The value that is stored in the list element. This can be of any ...
  12. [12]
    Implementing Lists Using Linked-Lists - cs.wisc.edu
    The major disadvantage of doubly linked lists (over singly linked lists) is that they require more space (every node has two pointer fields instead of one).
  13. [13]
    Linked Lists
    A linked list is a data structure in which we store each item of data in its own small piece of memory. The pieces are called nodes and are linked together ...<|separator|>
  14. [14]
    [PDF] Linked List Problems - Stanford CS Education Library
    All of the linked list code in this document uses the "classic" singly linked list structure: A single head pointer points to the first node in the list ...<|control11|><|separator|>
  15. [15]
    CS140 Lecture notes -- Doubly Linked Lists - UTK-EECS
    Doubly linked lists are like singly linked lists, except each node has two pointers -- one to the next node, and one to the previous node. This makes life nice ...Missing: benefits drawbacks
  16. [16]
    [PDF] List Overview A Simple List Interface Generic Types List Data ...
    Doubly-Linked vs Singly-Linked. • Advantages of doubly-linked over singly-linked lists. – some things are easier – e.g., reversing a doubly- linked list can ...
  17. [17]
    PHP 7 Data Structures and Algorithms - Multi-linked lists - O'Reilly
    A multi-linked list, or multiply linked list, is a special type of linked list that has two or more links linking each node to another node.
  18. [18]
    [PPT] Slide 1
    CSCI 3333 Data Structures. 10. Multiply-linked list: Each node contains two or more link fields, each field being used to connect the same set of data records ...
  19. [19]
    A note on hash linking | Communications of the ACM
    Index Terms. A note on hash linking. Information systems · Data management systems · Middleware for databases · Distributed transaction monitors · Information ...
  20. [20]
    Chapter 18: Linked lists - UTK-EECS
    Lists are made up of nodes, where each node contains a pointer or reference to the next node in the list. In addition, each node usually contains a unit of data ...
  21. [21]
    Skip lists: a probabilistic alternative to balanced trees
    Skip lists are data structures that use probabilistic balancing rather than strictly enforced balancing. As a result, the algorithms for insertion and ...Missing: original | Show results with:original
  22. [22]
    Practical concurrent unrolled linked lists using lazy synchronization
    This work introduces a new high-performance concurrent unrolled linked list with a lazy synchronization strategy. Most write operations under this strategy ...
  23. [23]
    Circular, doubly linked lists with a sentinel
    Although doubly linked circular linked lists with sentinels are the easiest linked lists to implement, they can take a lot of space. There are two references ( ...
  24. [24]
    [PDF] Doubly Linked Lists - Colby College
    - Note: Header and trailer are sentinel nodes. A sentinel node is a specifically designed node used with linked lists and trees as a traversal path terminator.
  25. [25]
    Linked Lists
    A list with data keeps the data between the sentinels. The (head) and (tail) nodes do not hold data, and we will have to craft the code so that traversals ...
  26. [26]
    Linked Lists
    Often, doubly linked lists are implemented by using sentinel nodes. In the example above, the nodes header and trailer are dummy nodes that contain irrelevant ...
  27. [27]
    Assignment 5: Sentinel Lists – 600.226: Data Structures (Spring 2017)
    The LinkedList follows the basic pattern for linked lists we've used from the beginning of the course: There are references to the first and last node in the ...
  28. [28]
    [PDF] Lab 5. Linked Lists Goal: To learn how to build linked lists. Part 1 ...