# Circular buffer

A circular buffer, also known as a ring buffer or cyclic buffer, is a fixed-size data structure that implements a first-in, first-out (FIFO) queue using a single array, where the end of the array logically connects back to the beginning so that writes wrap around without shifting elements. This design treats the array as circular, enabling insertion and removal in constant time, O(1), by managing read and write positions with modulo arithmetic. The core components of a circular buffer are the fixed-length array, a head index (pointing to the next element to be read or dequeued), and a tail index (pointing to the position for the next insertion or enqueue). When the tail reaches the end of the array, it wraps around to 0 using a modulo operation, such as tail = (tail + 1) % capacity, so data never needs to be reallocated or moved. To distinguish between the empty and full states, both of which can leave the head equal to the tail, a common implementation reserves one slot unused or tracks the current size explicitly. This structure is particularly suited to scenarios with bounded, predictable data volumes, as its size cannot be changed without reinitialization.

The concept of the circular buffer dates back to early computing systems, with implementations appearing in hardware as early as the 1950s for managing data in memory and I/O operations. It gained prominence in software for efficient buffering in operating systems and embedded systems during the latter half of the 20th century.

Circular buffers offer several advantages over linear arrays or linked lists for queue implementations, including no pointer-traversal overhead and no element shifting on wrap-around, leading to predictable performance in resource-constrained environments. They are widely applied in concurrent programming for buffering streaming data, such as producer-consumer scenarios where one thread generates data and another consumes it, often with synchronization primitives for thread safety. Common domains include operating systems for I/O buffering, networking protocols for handling packet streams, and audio processing for managing continuous signal flows without latency spikes. In embedded systems and software radios, they support efficient, low-latency data handling between hardware and applications in signal-processing and communication pipelines.

## Introduction

### Definition and Characteristics

A circular buffer, also known as a ring buffer, is a fixed-size data structure that uses a contiguous array to store elements in a first-in, first-out (FIFO) manner, logically connecting the end of the array back to the beginning to enable continuous reuse of space. This structure treats the array as if its endpoints are joined, allowing data to wrap around seamlessly when the end is reached, so it functions as an efficient queue without requiring additional memory allocation.

Key characteristics of a circular buffer include its fixed capacity, which remains constant throughout its use and does not support dynamic resizing, ensuring predictable memory usage. Insert and remove operations achieve constant-time complexity of O(1), as they involve simple index updates rather than element shifting or reallocation. When the buffer reaches full capacity, new insertions may be blocked, discarded, or, in some implementations, allowed to overwrite the oldest elements to maintain the bounded size. In contrast to linear buffers or queues, which must shift elements to make space for new insertions after dequeuing, a circular buffer achieves better space utilization by reusing freed positions directly through wrap-around, avoiding data movement and minimizing overhead in resource-constrained environments.

Mathematically, a circular buffer can be represented as an array B of fixed size N, where logical positions for insertion and removal are computed with modulo arithmetic to cycle indices: for example, the next position after index i is $(i + 1) \mod N$. This operation lets logical positions wrap across the array boundary while preserving contiguity in logical access.
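As a minimal illustration of the index formula, the following C snippet (with an assumed capacity of N = 4, chosen only for the example) prints the physical slot visited by each successive logical step, showing the wrap from slot 3 back to slot 0:

```c
#include <stdio.h>

#define N 4  /* assumed capacity for this illustration */

int main(void) {
    int index = 0;
    /* Advance the index eight times; after slot N-1 it wraps back to 0. */
    for (int step = 0; step < 8; step++) {
        printf("step %d -> slot %d\n", step, index);
        index = (index + 1) % N;  /* (i + 1) mod N */
    }
    return 0;
}
```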

### Historical Background

The concept of the circular buffer emerged in early computing during the 1950s as part of efforts to optimize memory usage in systems reliant on input and output devices, where fixed-size buffers helped manage data flow without frequent reallocations. Early computers like the LEO I (1951) used multiple buffers to handle different data streams simultaneously in business applications. In the 1960s, operating systems such as IBM's OS/360 (1966) advanced I/O buffering, contributing to the evolution of efficient queue structures including circular buffers. This period marked a shift toward standardized buffer designs for system reliability in emerging applications. The 1980s saw widespread adoption in embedded software, particularly in real-time operating systems like VxWorks, first released in 1987, where ring buffers were used for serial device I/O to ensure low-latency data transfer in resource-constrained environments. By the 1990s, the structure became integral to networking protocols, facilitating circular queuing for packet buffering in high-throughput data transmission. After 2000, circular buffers were adapted to concurrent programming with the rise of multi-core processors, enhancing concurrent access and scalability in frameworks such as modern networking stacks. Their evolution into standard system software, including POSIX-compliant implementations in operating systems such as Linux, solidified their role in efficient data handling across computing paradigms.

## Core Mechanics

### Basic Operations

The enqueue operation in a circular buffer adds an element at the tail position, the next available slot at the end of the current data sequence. The tail index is then updated with modulo arithmetic so that it wraps around to the beginning of the buffer when it reaches the end: tail = (tail + 1) % N, where N is the buffer size. Before adding, the operation checks whether the buffer is full; if so, it typically returns an error or discards the element, depending on the implementation. The dequeue operation removes and returns the element at the head position, which points to the oldest data in the buffer. The head index is then incremented with wrap-around: head = (head + 1) % N. It first checks whether the buffer is empty; if so, it returns an error. This wrap-around behavior enables efficient reuse of the fixed-size storage without shifting elements.

Pseudocode for these operations, using a count of elements (num) to handle the full and empty states, is as follows:

```
Enqueue(newValue):
    if num == N then
        return error  // Buffer full
    A[tail] = newValue
    tail = (tail + 1) % N
    num += 1

Dequeue():
    if num == 0 then
        return error  // Buffer empty
    firstValue = A[head]
    head = (head + 1) % N
    num -= 1
    return firstValue
```
Both enqueue and dequeue achieve constant time complexity of O(1), as they involve only index updates and direct array access without resizing or shifting elements.

Consider a circular buffer of size N=4, initialized with head=0, tail=0, and num=0. Enqueue 10: A[0]=10, tail=1, num=1. Enqueue 20: A[1]=20, tail=2, num=2. Enqueue 30: A[2]=30, tail=3, num=3. Enqueue 40: A[3]=40, tail=0, num=4 (now full). Dequeue: returns 10, head=1, num=3. Enqueue 50: A[0]=50, tail=1, num=4. Dequeue: returns 20, head=2, num=3. This trace illustrates filling to capacity, consuming from the front, and reusing slot 0 after wrap-around once space is freed.
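The pseudocode translates directly into C. The following sketch (the names RingBuffer, cb_enqueue, and cb_dequeue are illustrative, and the capacity of 4 is chosen only to match the trace above) runs the same sequence of operations:

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4  /* capacity chosen to match the worked example */

typedef struct {
    int a[N];
    int head;  /* index of the oldest element */
    int tail;  /* index of the next free slot */
    int num;   /* current number of stored elements */
} RingBuffer;

/* Returns false if the buffer is full. */
static bool cb_enqueue(RingBuffer *rb, int value) {
    if (rb->num == N) return false;
    rb->a[rb->tail] = value;
    rb->tail = (rb->tail + 1) % N;
    rb->num++;
    return true;
}

/* Returns false if the buffer is empty; otherwise stores the oldest value in *out. */
static bool cb_dequeue(RingBuffer *rb, int *out) {
    if (rb->num == 0) return false;
    *out = rb->a[rb->head];
    rb->head = (rb->head + 1) % N;
    rb->num--;
    return true;
}

int main(void) {
    RingBuffer rb = { .head = 0, .tail = 0, .num = 0 };
    int v;
    for (int i = 10; i <= 40; i += 10) cb_enqueue(&rb, i);  /* fill to capacity */
    cb_dequeue(&rb, &v); printf("dequeued %d\n", v);        /* 10 */
    cb_enqueue(&rb, 50);                                    /* reuses slot 0 after wrap-around */
    cb_dequeue(&rb, &v); printf("dequeued %d\n", v);        /* 20 */
    return 0;
}
```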

### Wrap-Around and State Management

The wrap-around mechanism in a circular buffer relies on modulo arithmetic for indexing, allowing the buffer to reuse its fixed-size storage as if it were an infinite linear array. Specifically, to compute the index of a position offset from the start, the formula $\text{index} = (\text{start} + \text{offset}) \mod N$ is applied, where N is the buffer size; whenever an index would reach or exceed N, it wraps back to 0, maintaining contiguity in a looped structure.

State management in circular buffers typically involves tracking two pointers: the head (or read pointer), which indicates the position of the next element to be dequeued, and the tail (or write pointer), which marks the position for the next enqueue operation. The buffer is considered empty when the head and tail pointers are equal ($\text{head} == \text{tail}$), signifying that no elements are stored. Conversely, the buffer is full when advancing the tail would catch up to the head, formally $(\text{tail} + 1) \mod N == \text{head}$, indicating that the buffer's capacity has been reached.

A key challenge in this state management arises from the ambiguity when $\text{head} == \text{tail}$: the condition can represent either an empty or a completely filled buffer, potentially leading to incorrect operations such as erroneous overwrites or failed reads. To resolve this without storing additional flags, a common solution is to reserve one slot in the array, effectively limiting the capacity to N-1 elements; this prevents the pointers from fully overlapping in the full state, so $\text{head} == \text{tail}$ unambiguously denotes empty while full is detected via the reserved-slot condition.

From a logical perspective, the circular buffer presents its elements as a continuous sequence regardless of physical boundaries, achieved through pointer arithmetic that simulates a ring; physically, however, the data resides in a linear array where elements may be split across the end and the beginning after wrap-around, and the modulo-based access restores the illusion of seamlessness.
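A minimal C sketch of the reserved-slot convention, using only head and tail indices and no element count (the type and function names here are illustrative):

```c
#include <stdbool.h>

#define N 8  /* physical array size; usable capacity is N - 1 */

typedef struct {
    int data[N];
    int head;  /* next position to read */
    int tail;  /* next position to write */
} Ring;

static bool ring_is_empty(const Ring *r) {
    return r->head == r->tail;            /* head == tail  =>  empty */
}

static bool ring_is_full(const Ring *r) {
    return (r->tail + 1) % N == r->head;  /* one slot kept unused => full */
}

/* Returns false instead of overwriting when the buffer is full. */
static bool ring_put(Ring *r, int value) {
    if (ring_is_full(r)) return false;
    r->data[r->tail] = value;
    r->tail = (r->tail + 1) % N;
    return true;
}

/* Returns false when the buffer is empty. */
static bool ring_get(Ring *r, int *out) {
    if (ring_is_empty(r)) return false;
    *out = r->data[r->head];
    r->head = (r->head + 1) % N;
    return true;
}
```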

## Implementations

### Array-Based Design

The array-based design of a circular buffer relies on a single contiguous array of fixed size N, paired with head and tail pointers initialized to 0 to denote the next dequeue and enqueue positions, respectively. This layout ensures constant-time access to elements through direct indexing, while wrap-around behavior is managed with modulo arithmetic on the pointers when they reach the array bounds. Memory allocation varies by language: in C, a static array provides efficient storage for homogeneous elements such as integers or bytes, declared as int buffer[N];. In Java, allocation uses a fixed-capacity array, such as Object[] buffer = new Object[capacity];, supporting generics for type-safe handling of any object type. Initialization involves defining the capacity N, allocating the array accordingly, and resetting the head and tail pointers to 0 to prepare for initial insertions. The following C code snippet illustrates a basic array declaration and pointer setup:
```c
#define BUFFER_SIZE 10

typedef struct {
    int buffer[BUFFER_SIZE];  /* fixed-size backing array */
    int head;                 /* index of the next element to dequeue */
    int tail;                 /* index of the next slot to enqueue into */
} CircularBuffer;

CircularBuffer cb = { .head = 0, .tail = 0 };
```
[](https://www.geeksforgeeks.org/c/c-program-to-implement-circular-queue/)
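For a capacity chosen at run time rather than compile time, the same structure can be heap-allocated. A minimal sketch, with hypothetical helper names cb_create and cb_destroy:

```c
#include <stdlib.h>

typedef struct {
    int *buffer;   /* heap-allocated backing array */
    int capacity;  /* number of slots */
    int head;      /* next index to dequeue */
    int tail;      /* next index to enqueue */
} DynCircularBuffer;

/* Allocates a buffer with the given capacity; returns NULL on failure. */
static DynCircularBuffer *cb_create(int capacity) {
    DynCircularBuffer *cb = malloc(sizeof *cb);
    if (cb == NULL) return NULL;
    cb->buffer = malloc(sizeof(int) * (size_t)capacity);
    if (cb->buffer == NULL) { free(cb); return NULL; }
    cb->capacity = capacity;
    cb->head = 0;
    cb->tail = 0;
    return cb;
}

/* Releases the buffer and its storage. */
static void cb_destroy(DynCircularBuffer *cb) {
    if (cb != NULL) {
        free(cb->buffer);
        free(cb);
    }
}
```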

### Detecting Full and Empty States

Detecting the full and empty states of a circular buffer is essential for proper operation, as the wrap-around nature can make these conditions ambiguous if not handled carefully. The standard approach uses head and tail pointers on a fixed-size [array](/page/Array) of length $N$, intentionally leaving one slot unused to distinguish states without additional variables. The buffer is empty when the head pointer (pointing to the next element to read) equals the tail pointer (pointing to the next position to write). It is full when advancing the tail by one position would equal the head, computed as $(tail + 1) \mod N == head$. This method ensures unambiguous detection through pointer comparisons alone and is commonly implemented in [systems programming](/page/Systems_programming) contexts.[](http://homepage.cs.uiowa.edu/~jones/opsys/spring02/notes/06.html)[](https://docs.kernel.org/core-api/circular-buffers.html)

Alternative techniques address the space inefficiency of the standard method by utilizing the full capacity of $N$ slots. One common variant maintains an integer count variable tracking the number of elements currently stored; the buffer is empty if $count == 0$ and full if $count == N$. Updates to count occur during enqueue (increment) and dequeue (decrement) operations. Another approach employs a [boolean](/page/Boolean) flag set to indicate full or empty whenever head equals tail, requiring the flag to be toggled appropriately during operations. These methods are proposed in [computer science](/page/Computer_science) curricula as ways to avoid wasting space, though they introduce extra [state management](/page/State_management).[](http://homepage.cs.uiowa.edu/~jones/opsys/spring02/notes/06.html)[](https://eclipse.umbc.edu/robucci/cmpeRSD/Lectures/L07__Ch3_Ch4_Implementation/)

The standard pointer-based method simplifies implementation logic by relying solely on index arithmetic but sacrifices approximately $1/N$ of the buffer's capacity, which can be significant for small $N$. In contrast, count or flag-based approaches enable full utilization of the array but require additional memory (typically 4-8 bytes for count or 1 byte for a flag) and slightly more complex update logic to maintain accuracy, especially in concurrent settings. The choice depends on constraints like memory availability and whether maximum throughput is prioritized over simplicity.[](http://homepage.cs.uiowa.edu/~jones/opsys/spring02/notes/06.html)[](https://eclipse.umbc.edu/robucci/cmpeRSD/Lectures/L07__Ch3_Ch4_Implementation/)

Edge cases arise in initialization, wrap-around, and overflow scenarios. Initially, the buffer starts empty with head and tail both at 0 (or count at 0 and flag indicating empty), ensuring the empty condition holds without elements present. After wrap-around, where pointers exceed $N-1$ and modulo brings them back to 0, the detection conditions remain valid due to the modular arithmetic. For overflow—attempting to enqueue when full—handling varies by application: options include returning an error to prevent addition, blocking until space is available, or discarding the oldest element by advancing the head and overwriting (effectively treating it as a fixed-size sliding window). Kernel implementations often check available space before enqueue and drop items if full to avoid blocking.[](http://homepage.cs.uiowa.edu/~jones/opsys/spring02/notes/06.html)[](https://docs.kernel.org/core-api/circular-buffers.html)
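For the overwrite-on-full policy mentioned above, a count-based enqueue can advance the head before writing, discarding the oldest element. A minimal C sketch of this sliding-window behavior (names and the count-based layout are assumptions, not from the cited sources):

```c
/* Enqueue that overwrites the oldest element when the buffer is full,
 * giving sliding-window semantics. Assumes a count-based layout:
 * int buf[N]; int head, tail, count; */
void enqueue_overwrite(int buf[], int N, int *head, int *tail, int *count, int value) {
    if (*count == N) {
        /* Buffer full: drop the oldest element by advancing head. */
        *head = (*head + 1) % N;
        *count -= 1;
    }
    buf[*tail] = value;
    *tail = (*tail + 1) % N;
    *count += 1;
}
```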

The following [pseudocode](/page/Pseudocode) illustrates detection functions for the standard pointer method and the count-based alternative, assuming a buffer array of size $N$:

**Standard Pointer Method:**
```c
bool isEmpty() { return head == tail; }
bool isFull()  { return (tail + 1) % N == head; }
```

**Count-Based Method:**
```c
bool isEmpty() { return count == 0; }
bool isFull()  { return count == N; }
```

These functions can be inlined for efficiency and are typically called before enqueue or dequeue operations.[](http://homepage.cs.uiowa.edu/~jones/opsys/spring02/notes/06.html)[](https://eclipse.umbc.edu/robucci/cmpeRSD/Lectures/L07__Ch3_Ch4_Implementation/)

## Applications

### Software and Data Processing

In software applications, circular buffers are widely employed to manage [streaming data](/page/Streaming_data), particularly in audio and video playback systems where input rates can vary due to network latency or processing delays. For instance, in audio [decoders](/page/Decoder) such as those handling MP3 bitstreams, a circular buffer serves as an input bitstream buffer to smoothly handle compressed data arrival, preventing underruns during decoding by allowing the decoder to read from the buffer at a constant rate while new data overwrites older portions as needed.[](https://www.ti.com/lit/pdf/spraaj6) Similarly, in video streaming, circular buffers facilitate FIFO-ordered playback by storing frames temporarily, enabling smooth rendering even when data arrives irregularly, thus minimizing playback interruptions.[](https://www.baeldung.com/cs/circular-buffer)

In networking protocols like [TCP/IP](/page/TCP/IP) stacks, circular buffers function as efficient packet queues for both receive and transmit operations, buffering incoming or outgoing packets to decouple the network interface controller from higher-layer processing. Receive ring buffers, for example, allow the [NIC](/page/NIC) to [DMA](/page/DMA) packets directly into [kernel](/page/Kernel) memory via a fixed-size circular structure, reducing CPU overhead and enabling high-throughput handling of bursts without dynamic allocation.[](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/tuning-the-network-performance_monitoring-and-managing-system-status-and-performance) Transmit buffers operate analogously, queuing packets for transmission while the driver manages wrap-around to maintain low-latency flow control.[](https://www.cisco.com/c/en/us/support/docs/asynchronous-transfer-mode-atm/ip-to-atm-class-of-service/6142-txringlimit-6142.html)

Circular buffers also play a key role in [logging](/page/Logging) and tracing mechanisms within [debugging](/page/Debugging) tools, where they act as [ring](/page/Ring) buffers to capture recent events without unbounded memory growth. In [kernel](/page/Kernel)-level [logging](/page/Logging), such as Linux's [dmesg](/page/Dmesg) facility, the [kernel](/page/Kernel) [ring](/page/Ring) buffer stores boot, hardware, and process events in a circular fashion, overwriting oldest entries when full to provide a [snapshot](/page/Snapshot) of the system's recent [state](/page/State) for post-mortem [analysis](/page/Analysis).[](https://linux-audit.com/system-administration/commands/dmesg/) This approach is particularly valuable in [embedded](/page/Embedded) or high-volume tracing scenarios, where tools dump the buffer contents only upon errors or triggers, ensuring efficient storage of diagnostic data.[](http://www.exampler.com/writing/ring-buffer.pdf)

In graphics rendering pipelines, circular buffers support frame buffering techniques like double-buffering, where two or more [buffer](/page/Buffer)s alternate between writing (rendering) and reading (display) to eliminate visual artifacts such as tearing. The front [buffer](/page/Buffer) displays the current [frame](/page/Frame) while the back [buffer](/page/Buffer) is filled with the next, and upon completion, they swap roles in a circular manner, ensuring [atomic](/page/Atomic) updates for smooth [animation](/page/Animation).[](https://learn.microsoft.com/en-us/windows/win32/direct3d12/fence-based-resource-management) More advanced pipelines may extend this to ring buffers holding multiple frames for [resource management](/page/Resource_management), such as upload heaps in Direct3D12, allowing the GPU to process frames asynchronously without stalling the CPU.[](https://learn.microsoft.com/en-us/windows/win32/direct3d12/fence-based-resource-management)
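As a simplified illustration of the circular role swap (independent of any particular graphics API), the following C program cycles a "back buffer" index over two frame slots, so the buffer just rendered becomes the one displayed on the next iteration:

```c
#include <stdio.h>

#define FRAME_COUNT 2  /* double buffering; 3 would give triple buffering */

int main(void) {
    int back = 0;  /* index of the buffer currently being rendered into */
    for (int frame = 0; frame < 6; frame++) {
        /* In a real pipeline, rendering targets frames[back] while the other
         * buffer is on screen; here we only print the roles. */
        printf("frame %d: render into buffer %d, display buffer %d\n",
               frame, back, (back + 1) % FRAME_COUNT);
        back = (back + 1) % FRAME_COUNT;  /* circular swap of roles */
    }
    return 0;
}
```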

A notable example of circular buffers in concurrent software processing is the RingBuffer in Java's LMAX Disruptor library, designed for high-throughput, low-latency event handling in multi-threaded environments. The Disruptor uses a pre-allocated, fixed-size ring buffer to enable mechanical sympathy with hardware, minimizing contention through single-writer principles and wait-free reads, achieving millions of operations per second for tasks like order processing in financial systems.[](https://lmax-exchange.github.io/disruptor/user-guide/index.html) This structure supports producer-consumer patterns by sequencing events in a circular [array](/page/Array), where consumers advance via sequence numbers, facilitating efficient data exchange without traditional [queue](/page/Queue) locks.[](https://www.baeldung.com/lmax-disruptor-concurrency)

### Hardware and Real-Time Systems

In [embedded](/page/Embedded) systems, circular buffers are widely employed by [direct memory access](/page/Direct_memory_access) (DMA) controllers to efficiently handle sensor data streams, such as those from analog-to-digital converters ([ADCs](/page/ADC)) in microcontrollers. This approach allows continuous [data acquisition](/page/Data_acquisition) without CPU intervention, where the DMA transfers ADC samples into a fixed-size [buffer](/page/Buffer) that wraps around upon reaching the end, ensuring seamless buffering of high-frequency inputs like audio or environmental sensors. For instance, in [STM32](/page/STM32) microcontrollers, the DMA peripheral supports a circular mode that automatically reloads the buffer address after completion, facilitating uninterrupted data collection from ADCs for applications in [real-time](/page/Real-time) [monitoring](/page/Monitoring).[](https://deepbluembedded.com/stm32-adc-continuous-conversion-mode-dma-poll-interrupt-single-channel/)

In real-time operating systems (RTOS) like [FreeRTOS](/page/FreeRTOS), circular buffers underpin task scheduling queues for [interrupt](/page/Interrupt) handling, enabling efficient management of asynchronous events in time-critical environments. [Interrupt](/page/Interrupt) service routines (ISRs) can enqueue data into these buffers, which tasks then dequeue with bounded latency, preventing [priority inversion](/page/Priority_inversion) and ensuring deterministic response times. [FreeRTOS](/page/FreeRTOS) stream and message buffers, implemented as circular structures, specifically support passing byte streams or discrete messages from ISRs to tasks, optimizing for low-overhead communication in resource-constrained systems.[](https://www.freertos.org/Documentation/02-Kernel/02-Kernel-features/04-Stream-and-message-buffers/02-Stream-buffer-example)

Circular buffers are integral to [signal processing](/page/Signal_processing) in [digital signal processor](/page/Digital_signal_processor) ([DSP](/page/DSP)) chips, particularly for [finite impulse response](/page/Finite_impulse_response) ([FIR](/page/Fir)) and [infinite impulse response](/page/Infinite_impulse_response) ([IIR](/page/Infinite_impulse_response)) filters that process continuous data streams. In [FIR](/page/Fir) filters, the buffer stores recent input samples in a circular fashion, allowing efficient [convolution](/page/Convolution) with filter coefficients without shifting large arrays, which is crucial for real-time audio or radar applications. [DSP](/page/DSP) architectures, such as those in [Texas Instruments](/page/Texas_Instruments) [TMS320](/page/TMS320) series, leverage circular addressing modes to implement these filters, minimizing memory access overhead and enabling sustained throughput for infinite input streams.[](https://users.ece.utexas.edu/~mcdermot/arch/0LD/lectures/Lecture_9.pdf)[](https://www.engr.colostate.edu/ECE423/lab04/Lab_Notes/Lab_Notes_4.pdf)
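To illustrate the technique independently of any particular DSP's circular-addressing hardware, the following C sketch keeps the most recent samples in a small ring and convolves them with the filter coefficients on every new input; the 4-tap moving-average coefficients and the name fir_push_and_filter are illustrative only:

```c
#include <stdio.h>

#define TAPS 4  /* illustrative 4-tap moving-average FIR filter */

static const float coeff[TAPS] = { 0.25f, 0.25f, 0.25f, 0.25f };
static float delay_line[TAPS];  /* circular buffer of recent input samples */
static int newest = 0;          /* index of the most recently stored sample */

/* Store a new input sample in the ring and return the filtered output. */
static float fir_push_and_filter(float x) {
    newest = (newest + 1) % TAPS;  /* advance write position with wrap-around */
    delay_line[newest] = x;        /* overwrite the oldest sample */

    float y = 0.0f;
    for (int k = 0; k < TAPS; k++) {
        /* coeff[k] multiplies the sample from k steps in the past. */
        int idx = (newest - k + TAPS) % TAPS;
        y += coeff[k] * delay_line[idx];
    }
    return y;
}

int main(void) {
    /* Feed a step input; the output ramps to 1.0 as the window fills. */
    for (int n = 0; n < 6; n++)
        printf("y[%d] = %.2f\n", n, fir_push_and_filter(1.0f));
    return 0;
}
```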

For peripheral interfaces like UART and [SPI](/page/SPI) in [firmware](/page/Firmware), circular buffers manage input/output (I/O) operations autonomously via [DMA](/page/DMA), reducing CPU load in [embedded](/page/Embedded) [firmware](/page/Firmware). In UART reception, for example, the buffer captures incoming serial data in a loop, allowing the [firmware](/page/Firmware) to process packets at its own pace while the hardware handles overflow prevention through wrap-around. Microcontroller vendors like [STMicroelectronics](/page/STMicroelectronics) integrate circular DMA modes for UART and [SPI](/page/SPI), enabling half-duplex communication in protocols where data arrives unpredictably, such as in sensor networks.[](https://controllerstech.com/stm32-uart-4-receive-data-using-dma/)
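A common interrupt-driven variant of this pattern (shown without DMA, as a simplified single-core sketch) has the receive interrupt produce bytes into a ring buffer that the main loop consumes at its own pace. The uart_rx_isr hook below is hypothetical; the real handler name and registration depend on the MCU vendor's HAL:

```c
#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 128  /* one slot is left unused to mark the full state */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile unsigned rx_head = 0;  /* advanced only by the main loop */
static volatile unsigned rx_tail = 0;  /* advanced only by the interrupt handler */

/* Called with each received byte from the vendor-specific UART interrupt
 * (hypothetical hook). Drops the byte if the buffer is full. */
void uart_rx_isr(uint8_t byte) {
    unsigned next = (rx_tail + 1) % RX_BUF_SIZE;
    if (next != rx_head) {
        rx_buf[rx_tail] = byte;
        rx_tail = next;
    }
}

/* Called from the main loop; returns false if no data is pending.
 * volatile suffices here only for the single-core, single-ISR case. */
bool uart_read_byte(uint8_t *out) {
    if (rx_head == rx_tail) return false;  /* empty */
    *out = rx_buf[rx_head];
    rx_head = (rx_head + 1) % RX_BUF_SIZE;
    return true;
}
```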

A prominent example is in automotive electronic control units (ECUs), where circular buffers queue Controller Area Network (CAN) bus messages to handle variable traffic loads without data loss. In ECUs processing vehicle diagnostics or control signals, the buffer stores incoming CAN frames in a ring structure, allowing prioritized dequeuing by the ECU firmware while the bus operates at high speeds. This design ensures reliable message queuing in safety-critical systems, as demonstrated in implementations for ECU fingerprinting and fault detection, where synchronization between CAN transceivers and processing units relies on circular buffering to maintain timing integrity.[](https://ieeexplore.ieee.org/document/10902101)[](https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=4036&context=theses)

## Advanced Topics

### Performance Optimizations

To enhance the [performance](/page/Performance) of circular buffers, particularly in high-throughput scenarios, developers often prioritize [cache](/page/Cache) efficiency by aligning the underlying [array](/page/Array) to [cache](/page/Cache) line boundaries, typically 64 bytes on modern processors. This [alignment](/page/Alignment) minimizes [cache](/page/Cache) line splits and [false sharing](/page/False_sharing), ensuring that read and write [operations](/page/Operation) to head and tail pointers access contiguous memory without evicting unrelated data from the [cache](/page/Cache). Additionally, using buffer sizes that are powers of two allows the [modulo](/page/Modulo) operation for index wrapping—([index](/page/Index) % size)—to be replaced with a faster bitwise AND operation ([index](/page/Index) & (size - 1)), reducing computational overhead in hot paths.[](https://fast.dpdk.org/doc/pdf-guides-16.04/prog_guide-16.04.pdf)[](https://www.arxiv.org/pdf/2507.22764)
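One common way to apply the power-of-two trick, sketched below under the assumption that the capacity is a compile-time constant: the head and tail counters increase monotonically and are masked only when the array is accessed, which also makes the element count simply tail minus head (unsigned wrap-around is well defined in C):

```c
#include <stdbool.h>
#include <stdint.h>

#define CAPACITY 1024u            /* must be a power of two */
#define MASK     (CAPACITY - 1u)  /* index & MASK replaces index % CAPACITY */

typedef struct {
    uint32_t head;  /* total elements ever dequeued */
    uint32_t tail;  /* total elements ever enqueued */
    int items[CAPACITY];
} Pow2Ring;

static bool p2_enqueue(Pow2Ring *r, int value) {
    if (r->tail - r->head == CAPACITY) return false;  /* full */
    r->items[r->tail & MASK] = value;  /* bitwise AND instead of modulo */
    r->tail++;
    return true;
}

static bool p2_dequeue(Pow2Ring *r, int *out) {
    if (r->tail == r->head) return false;             /* empty */
    *out = r->items[r->head & MASK];
    r->head++;
    return true;
}
```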

Branch prediction can be improved by restructuring operations to avoid conditional branches in performance-critical code. For instance, unconditional increments of indices followed by masking, rather than explicit if-statements for wrap-around checks, reduce branch mispredictions, as modern CPUs predict taken branches less accurately under variable loads. This approach keeps the instruction pipeline fuller, especially in single-producer/single-consumer (SPSC) scenarios where predictability is high.

In multi-threaded environments, lock-free designs leverage atomic operations, such as [compare-and-swap](/page/Compare-and-swap) (CAS), to update shared pointers without traditional locks, avoiding contention and context switches. These implementations, often built on ring buffers with indirection for scalability, use fetch-and-add (FAA) or CAS on indices to ensure progress guarantees, achieving high throughput even under concurrent access by multiple producers and consumers. For example, the Scalable Circular Queue (SCQ) employs FAA for enqueue operations, demonstrating performance comparable to or exceeding state-of-the-art lock-free queues while maintaining portability across architectures.[](https://arxiv.org/pdf/1908.04511)[](https://drops.dagstuhl.de/storage/00lipics/lipics-vol146-disc2019/LIPIcs.DISC.2019.28/LIPIcs.DISC.2019.28.pdf)
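Multi-producer algorithms such as SCQ are beyond a short sketch, but the simplest lock-free case, a single-producer/single-consumer ring, can be written with C11 atomics and acquire/release ordering alone, without CAS. The following is a minimal sketch of that pattern (names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QSIZE 1024u  /* power of two */
#define QMASK (QSIZE - 1u)

typedef struct {
    _Atomic unsigned head;  /* advanced only by the consumer */
    _Atomic unsigned tail;  /* advanced only by the producer */
    int slots[QSIZE];
} SpscRing;

/* Producer side: returns false if the ring is full. */
static bool spsc_push(SpscRing *q, int value) {
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == QSIZE) return false;  /* full */
    q->slots[tail & QMASK] = value;
    /* Release ensures the slot write is visible before the new tail. */
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false if the ring is empty. */
static bool spsc_pop(SpscRing *q, int *out) {
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (tail == head) return false;          /* empty */
    *out = q->slots[head & QMASK];
    /* Release tells the producer the slot has been consumed. */
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}
```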

Memory usage optimizations focus on eliminating extra flags for full/empty states through clever sizing strategies, such as reserving one slot in a power-of-two [buffer](/page/Buffer) to distinguish conditions via pointer comparisons alone, thus avoiding additional [atomic](/page/Atomic) variables. In [data processing](/page/Data_processing) applications, SIMD instructions can accelerate bulk enqueues or dequeues by vectorizing memory copies or computations over buffer segments, particularly useful in [digital signal processing](/page/Digital_signal_processing) where aligned circular buffers enable efficient [FIR](/page/F.I.R.) filter implementations without address recalculation.[](https://ez.analog.com/dsp/sharc-processors/f/q-a/61614/how-to-use-simd-to-speed-up-fir-filter-with-circular-buffering)

Benchmark studies highlight the impact of these techniques; for instance, power-of-two sizing and [cache](/page/Cache) alignment can improve enqueue/dequeue throughput compared to non-optimized arrays, as measured in networking workloads where [modulo](/page/Modulo) replacements and alignment reduce [latency](/page/Latency) by minimizing [cache](/page/Cache) misses and arithmetic costs. Lock-free variants further amplify this, with evaluations showing substantial improvements over mutex-protected buffers in multi-core settings due to reduced [synchronization](/page/Synchronization) overhead.[](https://fast.dpdk.org/doc/pdf-guides-16.04/prog_guide-16.04.pdf)[](https://arxiv.org/pdf/1908.04511)

### Variants and Extensions

The fixed-length element variant of the circular buffer, often referred to as a ring buffer, consists of a homogeneous [array](/page/Array) of fixed-size elements, such as bytes or [primitive](/page/Primitive) types, arranged in a contiguous [memory](/page/Memory) block that logically wraps around upon reaching the end. This design ensures efficient [FIFO](/page/FIFO) operations for uniform data streams, commonly used in low-level systems like [network packet](/page/Network_packet) handling or audio [processing](/page/Processing) where predictability in memory access is critical.[](https://embeddedartistry.com/blog/2017/05/17/creating-a-circular-buffer-in-c-and-c/) For instance, byte rings serve as standard buffers for [streaming data](/page/Streaming_data) in [embedded](/page/Embedded) environments, minimizing overhead by avoiding dynamic allocation.[](https://www.sciencedirect.com/topics/computer-science/circular-buffer)

In contrast, the contiguous-block variant adapts the circular buffer to handle variable-size data blocks by incorporating [metadata](/page/Metadata) to track block lengths, with head and tail pointers represented as byte offsets rather than discrete indices. A prominent example is the BipBuffer, a bi-partite structure that partitions the buffer into two segments to guarantee contiguous writes without fragmentation, even when wrapping around, which is particularly useful for [streaming media](/page/Streaming_media) or [protocol](/page/Protocol) implementations requiring unbroken data chunks.[](https://ferrous-systems.com/blog/lock-free-ring-buffer/) This variant is applied in file I/O caching within operating systems, where it manages sequential reads and writes to disk by buffering variable-length sectors in a circular manner, reducing [seek](/page/SEEK) times and improving throughput for continuous data flows.[](https://www.cs.fsu.edu/~baker/opsys/notes/io.html)

Unbounded circular buffers extend the fixed-size model by enabling dynamic resizing, often achieved through linking multiple fixed buffers or reallocating underlying storage as needed.[](http://psy-lob-saw.blogspot.com/2014/04/notes-on-concurrent-ring-buffer-queue.html) The Boost.CircularBuffer library, for example, provides a space-optimized variant that adjusts capacity dynamically while maintaining circular semantics, accommodating growth without overwriting data until explicitly configured.[](https://www.boost.org/doc/libs/1_84_0/doc/html/circular_buffer.html) This approach approximates unbounded behavior in structures like Java's [ArrayBlockingQueue](/page/Array), which employs a fixed circular [array](/page/array) for bounded concurrency but inspires resizable extensions in high-level [queue](/page/Queue) designs.[](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ArrayBlockingQueue.html)

For concurrent environments, multi-producer/multi-consumer variants build on bounded circular buffers by incorporating mechanisms like sequence numbers and barriers to synchronize access without traditional locks, enabling high-throughput processing in multi-threaded systems. Recent developments include BlockFIFO and MultiFIFO (as of 2025), which introduce scalable relaxed queues based on ring buffers to further enhance performance in concurrent settings by allowing bounded approximations of [FIFO](/page/FIFO) order with improved [scalability](/page/Scalability).[](https://arxiv.org/pdf/2507.22764) The LMAX Disruptor pattern exemplifies this, using a fixed-size ring buffer with sequence barriers to enforce dependencies among multiple consumers, ensuring ordered event processing while supporting multiple producers through [atomic](/page/Atomic) sequence updates.[](https://lmax-exchange.github.io/disruptor/user-guide/index.html)[](https://trishagee.com/2011/07/10/dissecting_the_disruptor_wiring_up_the_dependencies/) These extensions are vital in low-latency applications, such as financial trading platforms, where they achieve millions of operations per second by minimizing contention.[](https://lmax-exchange.github.io/disruptor/disruptor.html)

## References

  1. [1]
    circular buffer data structure
    Circular buffer = a data structure that uses an array as if it were connected "end-to-end". Schematically: This data structure is also known as: ...
  2. [2]
    CS3650 Computer Systems – Assignment 4: Data Structures in C
    There are no pointers to traverse. However, because it uses a fixed size array, the size of the circular buffer can't be increased. On the other hand, linked ...
  3. [3]
    Project 1: Sorted Square List with Circular Buffers - UMBC
    Feb 23, 2018 · A circular buffer, typically used to implement a queue (review queues if you don't remember), is a tweak on arrays that solves the problem of ...
  4. [4]
    [PDF] CS 105 Ring Buffer
    A ring buffer, also called a circular buffer, is a common method of sharing information between a producer and a consumer.
  5. [5]
    [PDF] Operating Systems Mechanisms For Continuous Media
    Source and sinks may be able to generate or accept data in more than one format, and mixers may be able to convert between formats. Given sources and sinks, the ...
  6. [6]
    [PDF] A Simplified Approach to High Quality Music and Sound over IP
    Buffering of audio data which is needed to safeguard against delivery uncertainties can cause signal delays of seconds. Audio is in general an unfor- giving ...
  7. [7]
    [PDF] Design and Implementation of Software Radios Using a
    4-6 Circular buffer ... Now, however, applications such as audio mixers and video players are run as applications under full-featured operating systems (e.g. ...
  8. [8]
    15. Stacks and Queues | CS 2110
    A circular buffer is a data structure that stores its elements at contiguous indices with the added condition that we view the last index as the immediate ...
  9. [9]
    Ring Buffer Basics - Embedded
    Aug 7, 2013 · The ring buffer (also known as a circular buffer, circular queue, or cyclic buffer) is a circular software queue. This queue has a first-in- ...
  10. [10]
    Meet LEO, the world's first business computer - Science Museum
Nov 9, 2018 · What is the LEO computer? A pioneering electronic computer, built 1949–1951, to manage Lyons tea shop orders; Based on electronic digital ...
  11. [11]
    [PDF] Lecture 15 - Queues - Williams College
Pseudocode: Enqueue and Dequeue in an Array with Circular-Indexing. Below is pseudocode for adding and removing an element from a queue using an array in which.
  12. [12]
    [PDF] Data Structures and Algorithms - Computer Science
Store queue elements in a circular array. Maintain the index of the first element (head) and the next location to be inserted (tail). Enqueue: data[tail] = elem ...
  13. [13]
    Circular Buffer | Baeldung on Computer Science
    Mar 18, 2024 · A circular buffer is an array of constant length, and we use it to store data in a continuous loop. It is also known as a ring buffer because it stores the ...
  14. [14]
    Creating a Circular Buffer in C and C++ - Embedded Artistry
    May 17, 2017 · Circular buffers (also known as ring buffers) are fixed-size buffers that work as if the memory is contiguous & circular in nature.
  15. [15]
    Circular Buffers - The Linux Kernel documentation
    A circular buffer is a fixed-size buffer with a head index for producers and a tail index for consumers. The indices wrap, allowing infinite data flow.
  16. [16]
    Array-Based Queues
    In C++, we could implement an array-based queue as a class. To conserve space, we'll implement it as a "circular queue", an array in which the last position is ...
  17. [17]
    Implementing a Queue using a circular array - Emory CS
    A circular buffer has 2 "pointers" (or indices): · Read pointer: (or read position). Read pointer = the index of the circular array used by a read operation.
  18. [18]
    [PDF] Implementing Queues
    Feb 13, 2013 · The data structure described on the preceding slide is called a ring buffer because the end of the array is linked back to the beginning.
  19. [19]
    C Program to Implement Circular Queue - GeeksforGeeks
    Jul 23, 2025 · In this article, we will learn how to implement circular queue in C programming language.
  20. [20]
    Implementing a Ring Buffer in Java | Baeldung
    Jun 27, 2020 · A Ring Buffer is implemented using a fixed-size array that wraps around at the boundaries. Apart from the array, it keeps track of three things:.
  21. [21]
    22C:116, Lecture 6, Spring 2002 - University of Iowa
    Exercise: Propose other solutions to detecting the full and empty conditions in a circular buffer, perhaps by counting the data elements in the buffer.
  22. [22]
    Lecture 07 – Ch3 Ch4 Implementations Data Flow Implementation in ...
Oct 14, 2024 · Option 1 : flag bit: whenever the read and write pointer become equal, a flag is used to record which state (full/empty) the queue is in ...
  23. [23]
    [PDF] Bitstream Buffer in Audio Coprocessor - Texas Instruments
    Jul 6, 2008 · By use of data breakpoints, a pseudo circular buffer is implemented for handling the compressed audio bitstream. Thus eliminating the need for ...
  24. [24]
    Chapter 33. Tuning the network performance | 9
As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel ...
  25. [25]
    Understanding and Tuning the tx-ring-limit Value - Cisco
    Dec 14, 2007 · Cisco IOS and interface controllers use these rings to control which buffers are used to receive and transmit packets to the media. The rings ...
  26. [26]
    dmesg: show log events from kernel ring buffer - Linux Audit
    The dmesg command shows available Linux kernel log entries from the kernel ring buffer, which include events about the boot, hardware, and processes.
  27. [27]
    [PDF] Using Ring Buffer Logging to Help Find Bugs
    The solutions revolve around the use of a ring buffer to log key events in the life of a system. A ring buffer is a buffer with a fixed size.
  28. [28]
    Fence-Based Resource Management - Win32 apps - Microsoft Learn
    Dec 30, 2021 · A ring buffer is one way to manage an upload heap. The ring buffer holds data required for the next few frames. The app maintains a current data ...
  29. [29]
    LMAX Disruptor User Guide
Introduction. The Disruptor is a library that provides a concurrent ring buffer data structure. It is designed to provide a low-latency, high-throughput ...
  30. [30]
    Concurrency with LMAX Disruptor - An Introduction | Baeldung
    Jan 9, 2024 · This article introduces the LMAX Disruptor and talks about how it helps to achieve software concurrency with low latency. We will also see a ...
  31. [31]
    FreeRTOS stream & message buffers
    Stream buffers allow a stream of bytes to be passed from an interrupt service routine to a task, or from one task to another task.
  32. [32]
    [PDF] Digital Signal Processors
    Jan 28, 2012 · ▫ DSPs generally have an “infinite” continuous data stream. ▫ Most ... – To save memory, buffer is often organized as a circular buffer.
  33. [33]
    [PDF] REAL-TIME DSP LABORATORY4: - Colorado State University
    4.8 Circular Buffers on the TMS320C6713. The most efficient way to code FIR filters is to use circular buffers. The idea of circular buffers is that once you ...
  34. [34]
  35. [35]
    [PDF] design of an automotive iot device to improve driver fault detection ...
    contents of the circular buffers. The primary circular buffer used is either CAN speed or GPS speed depending on the decoding status of the vehicle bus.
  36. [36]
    [PDF] Programmer's Guide - DPDK
    Apr 12, 2016 · Each figure represents a simplified state of the ring, which is a circular buffer. ... buffers may sit idle on some core's cache, the speed at.
  37. [37]
    [PDF] BlockFIFO & MultiFIFO: Scalable Relaxed Queues - arXiv
    Jul 30, 2025 · Sequential implementations using circular arrays are fast and cache-efficient, offering high throughput with constant-time operations. As most ...
  38. [38]
    [PDF] A Scalable, Portable, and Memory-Efficient Lock-Free FIFO Queue
    Aug 13, 2019 · Contributions of the paper. • We introduce an approach of building ring buffers using indirection and two queues. • We present our scalable ...
  39. [39]
    [PDF] A Scalable, Portable, and Memory-Efficient Lock-Free FIFO Queue
Our evaluation validates that our queue has exceptional memory efficiency compared to other algorithms and its performance is often comparable to, or exceeding ...
  40. [40]
    Circular Buffer: A Critical Element of Digital Signal Processors
Nov 13, 2017 · This article discusses circular buffering, which allows us to significantly accelerate the data transfer in a real-time system.
  41. [41]
    Circular Buffer - an overview | ScienceDirect Topics
A circular buffer is a utility used to transfer successive data values from a producer thread to a consumer thread, who retrieves the data in FIFO (first in ...
  42. [42]
    of a lock-free ring-buffer with contiguous reservations
    Jun 3, 2019 · A BipBuffer is a bi-partite circular buffer that always supports writing a contiguous chunk of data, instead of potentially splitting a write in two chunks.
  43. [43]
    The I/O Subsystem - CS, FSU
    Circular Buffering. Notice that time spent copying data from the OS buffer to the user buffer can be saved if we use memory mapping.
  44. [44]
    Notes On Concurrent Ring Buffer Queue Mechanics
    Apr 21, 2014 · This is an important property of concurrent queues that elements are not made visible to other threads before preceding writes to their contents.
  45. [45]
    Chapter 7. Boost.Circular Buffer
  46. [46]
    ArrayBlockingQueue (Java Platform SE 8 ) - Oracle Help Center
A bounded blocking queue backed by an array. This queue orders elements FIFO (first-in-first-out). The head of the queue is that element that has been on the ...
  47. [47]
    Dissecting the Disruptor: Wiring up the dependencies - Trisha Gee
Jul 10, 2011 · To manage the dependencies between the consumers you need two consumer barriers. The first just talks to the ring buffer and consumers one and two ask it for ...
  48. [48]
    LMAX Disruptor: High performance alternative to bounded queues ...
    The Disruptor is a general-purpose mechanism that solves a complex problem in concurrent programming in a way that maximizes performance, and that is simple to ...The Complexities of... · Design of the LMAX Disruptor · Teasing Apart the Concerns