
Data buffer

A data buffer is a region of physical memory used to temporarily hold data during transfer between components in a computer system, compensating for differences in processing speeds or data transfer rates between devices such as hardware peripherals and the processor. This storage mechanism ensures smooth data flow by allowing faster components to proceed without waiting for slower ones, preventing bottlenecks in operations like reading from disks or transmitting over networks. In operating systems, data buffers are integral to I/O management, where they address mismatches in device transfer sizes and enable efficient handling of asynchronous operations. Common implementations include single buffering, which uses one buffer for sequential data staging; double buffering, employing two alternating buffers to overlap computation and I/O for improved throughput, as seen in graphics rendering; and circular buffering, a queue-like structure of multiple buffers that cycles continuously for data streams like audio or video. Buffers also play critical roles in networking, where they fragment large messages into packets for transmission and reassemble them at the destination, and in process synchronization, such as the producer-consumer problem, to coordinate data access between threads. Beyond I/O, data buffers appear in database management systems as part of buffer pools—collections of fixed-size frames that cache disk blocks to minimize physical reads. While buffers enhance performance, improper management can lead to issues like overflows, where excess data corrupts adjacent memory, though modern systems mitigate this through bounds checking and secure coding practices. Overall, data buffers form a foundational element in computing, enabling reliable and efficient data handling across software and hardware layers.

Fundamentals

Definition

A data buffer is a region of physical or virtual memory that serves as temporary storage for data during its transfer between two locations, devices, or processes, primarily to compensate for differences in data flow rates, event timing, or data handling capacities between the involved components. This mechanism allows the source and destination to operate at their optimal speeds without synchronization issues arising from mismatched rates or latency. Key attributes of data buffers include their size, which can be fixed (pre-allocated to a specific capacity) or variable (dynamically adjustable based on needs), and their storage medium, which is typically volatile memory such as RAM for high-speed operations but can be non-volatile storage like disk for longer-term holding in certain scenarios. Buffers also support various access models, ranging from single-producer/single-consumer patterns in simple producer-consumer setups to multi-producer/multi-consumer configurations in more complex concurrent environments. These attributes enable buffers to adapt to diverse system requirements while maintaining data integrity during transit. Unlike caches, which are optimized for repeated, frequent access to data based on locality principles to reduce average access time, data buffers emphasize transient holding specifically for transfer or streaming operations without inherent optimization for reuse. Similarly, while queues are data structures that enforce a particular ordering (typically first-in, first-out), data buffers do not inherently impose such ordering unless explicitly designed to do so, focusing instead on raw temporary storage capacity.

Purpose and Characteristics

Data buffers primarily serve to compensate for discrepancies in processing speeds between data producers and consumers, such as between a fast CPU and slower disk I/O operations, allowing the faster component to continue working without waiting. They also smooth out bursty data flows by temporarily holding variable-rate inputs, preventing disruptions in continuous processing pipelines. Additionally, buffers reduce blocking in concurrent systems by decoupling producer and consumer activities, enabling asynchronous operations that minimize idle time. Key characteristics of data buffers include their temporality, as data within them is typically overwritten or discarded once consumed, distinguishing them from persistent storage. Buffer sizes are determined based on factors like transfer sizes from devices or requirements for hiding latency, often ranging from small units like 4 KB for memory copies to larger allocations such as 4 MB for disk caches to optimize efficiency. In terms of performance impact, buffering decreases the frequency of context switches and I/O interruptions, thereby enhancing overall system responsiveness. The benefits of buffers encompass improved throughput, as seen in disk caching scenarios achieving rates up to 24.4 MB/sec compared to unbuffered access, and reduced latency in data pipelines through techniques like read-ahead buffering. They also provide resilience in data streams by maintaining temporary copies that support recovery from transient failures without permanent loss. In the basic producer-consumer model, a producer deposits data into the buffer while the consumer retrieves it, coordinated by synchronization primitives such as semaphores to manage access and avoid conflicts.
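The coordination described above can be made concrete with a short sketch. The following C program is a minimal, illustrative bounded-buffer producer-consumer using POSIX semaphores and a mutex; all names (buffer, BUF_SIZE, empty_slots, full_slots) are invented for the example rather than drawn from any particular system.

```c
/* Minimal bounded-buffer producer-consumer sketch using POSIX
 * semaphores; illustrative names throughout.
 * Build (Linux): gcc -pthread producer_consumer.c */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8

static int buffer[BUF_SIZE];
static int in = 0, out = 0;            /* insert and remove positions */
static sem_t empty_slots, full_slots;  /* counts of free/used slots   */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty_slots);            /* block while buffer is full */
        pthread_mutex_lock(&lock);
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);             /* signal data is available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full_slots);             /* block while buffer is empty */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);            /* signal a slot is free */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);   /* all slots start empty */
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

The two semaphores count free and occupied slots, so the producer blocks only when the buffer is full and the consumer only when it is empty, which is exactly the decoupling the paragraph above describes.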

Types

Linear Buffers

A linear buffer consists of a contiguous block of memory that is accessed sequentially from the beginning to the end, making it suitable for one-time transfers where elements are written and read in a straight-line order without looping back. This structure relies on a fixed-size allocation, typically implemented as an array or similar contiguous data structure, to hold temporary data during operations like reading from or writing to devices. The mechanics of a linear buffer involve head and tail pointers that advance linearly as data is enqueued or dequeued; the tail pointer tracks the position for inserting new data, while the head pointer indicates the location of the next item to be removed. When the buffer becomes full, further writes may result in overflow unless the buffer is reset by moving pointers back to the start or reallocated to a larger contiguous region; similarly, upon emptying, the pointers reach the end and require reinitialization for reuse. Buffers like this generally facilitate rate matching between data producers and consumers, such as in device I/O where transfer speeds differ. Linear buffers find particular application in sequential loading of files, where large datasets are read into memory in order for single-pass processing without subsequent reuse, or in simple I/O routines that handle discrete, non-recurring transfers like reading configuration files. In these scenarios, the sequential nature ensures straightforward handling of data in a single pass, avoiding the complexity of more elaborate buffer structures. One key advantage of linear buffers is their simplicity of implementation, requiring only basic pointer arithmetic and contiguous allocation, which minimizes code complexity and development effort. They also impose low runtime overhead for short-lived operations, as there is no need for modulo arithmetic or additional logic to manage wrapping. Despite these benefits, linear buffers exhibit limitations in efficiency for continuous data streams, as reaching the end necessitates frequent resets or reallocations, potentially causing performance bottlenecks through repeated memory operations or temporary data halts. This makes them less ideal for scenarios demanding persistent, high-throughput data flow without interruptions.
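As a sketch of these mechanics, the following C fragment implements a linear buffer under the conventions just described: indices only advance, a full buffer rejects writes, and reuse requires moving both pointers back to the start. The type and function names are illustrative.

```c
/* Illustrative linear buffer: a fixed contiguous array with head/tail
 * indices that advance in one direction only; once the tail reaches the
 * end, the buffer must be reset or reallocated (no wrap-around). */
#include <stdbool.h>
#include <stddef.h>

#define CAPACITY 1024

typedef struct {
    unsigned char data[CAPACITY];
    size_t head;   /* next position to read  */
    size_t tail;   /* next position to write */
} linear_buffer;

static bool lb_put(linear_buffer *b, unsigned char byte) {
    if (b->tail == CAPACITY)
        return false;          /* full: caller must reset or grow */
    b->data[b->tail++] = byte;
    return true;
}

static bool lb_get(linear_buffer *b, unsigned char *out) {
    if (b->head == b->tail)
        return false;          /* empty: no unread data */
    *out = b->data[b->head++];
    return true;
}

static void lb_reset(linear_buffer *b) {
    b->head = b->tail = 0;     /* reuse requires moving indices back */
}
```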

Circular Buffers

A circular buffer, also known as a ring buffer, is a fixed-size data structure that uses a single contiguous array as if it were connected end-to-end, enabling read and write pointers to wrap around to the beginning upon reaching the end. This FIFO-oriented design facilitates the continuous handling of data streams by overwriting the oldest entries once the buffer is full, without requiring data relocation or buffer resizing. The mechanics of a circular buffer rely on two pointers—one for the write position (tail) and one for the read position (head)—managed through modulo arithmetic to compute effective indices within the fixed capacity. For instance, a write places data at buffer[tail % size], incrementing tail afterward, while reads use buffer[head % size] before advancing head. To distinguish a full buffer from an empty one (where both pointers coincide), a common approach reserves one slot unused, yielding an effective capacity of size - 1; the buffer is empty when head == tail and full when (tail + 1) % size == head. This structure offers constant-time O(1) operations for insertion and removal, eliminating the need to shift elements as in linear buffers, which enhances efficiency for real-time data streams like audio queues. Unlike linear buffers that cease operation upon filling and require reallocation, circular buffers support ongoing reuse through wrapping, optimizing memory in resource-constrained environments. Circular buffers emerged as an efficient queueing mechanism in early computing systems, with the concept documented in Donald Knuth's The Art of Computer Programming (Volume 1), and remain prevalent in embedded devices for handling asynchronous transfers.
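The following C fragment sketches exactly the convention described above (one reserved slot, modulo wrap-around); the ring_buffer type and function names are illustrative rather than taken from any library.

```c
/* Ring buffer using the "reserve one slot" convention: empty when
 * head == tail, full when (tail + 1) % SIZE == head, so the effective
 * capacity is SIZE - 1 elements. */
#include <stdbool.h>
#include <stddef.h>

#define SIZE 16

typedef struct {
    int data[SIZE];
    size_t head;   /* read position  */
    size_t tail;   /* write position */
} ring_buffer;

static bool rb_put(ring_buffer *rb, int value) {
    if ((rb->tail + 1) % SIZE == rb->head)
        return false;                    /* full */
    rb->data[rb->tail] = value;
    rb->tail = (rb->tail + 1) % SIZE;    /* wrap via modulo arithmetic */
    return true;
}

static bool rb_get(ring_buffer *rb, int *out) {
    if (rb->head == rb->tail)
        return false;                    /* empty */
    *out = rb->data[rb->head];
    rb->head = (rb->head + 1) % SIZE;
    return true;
}
```

Both operations touch a constant number of fields regardless of how much data is stored, which is the O(1) behavior noted above.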

Double Buffers

Double buffering, also known as ping-pong buffering, is a technique that employs two distinct buffer regions to facilitate seamless data transfer in producer-consumer scenarios, where one buffer is filled with data while the other is simultaneously consumed or processed. This approach allows the producer (e.g., a device like a disk or input peripheral) to write to the inactive buffer without interrupting the consumer (e.g., a processing unit or display), ensuring continuous operation. The mechanics of double buffering involve alternating between the two buffers through a swap operation, typically implemented via pointer exchange or status flags that indicate which buffer is active for reading or writing. This swap occurs at designated synchronization points to prevent race conditions, often employing atomic operations or interrupt-driven queues to coordinate access between producers and consumers, particularly in environments where processing times for filling and consuming vary significantly. By overlapping these operations, double buffering maintains a steady data flow, avoiding stalls that would arise from waiting for one buffer to complete its cycle. A key advantage of double buffering is its ability to hide the latency of data preparation or transfer behind ongoing consumption, effectively doubling the throughput in pipelined systems by overlapping I/O and computation. This masking is especially beneficial in scenarios with mismatched speeds between data sources and sinks. It is commonly applied in graphics rendering, where front and back buffers alternate to display complete frames without tearing—rendering occurs in the back buffer while the front buffer is shown, followed by a swap. Similarly, in disk I/O operations, it enables efficient block transfers by allowing one buffer to receive data from storage while the other is processed by the CPU.
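A minimal single-threaded illustration of the swap mechanics follows; in a real producer-consumer deployment the swap would be guarded by a lock, atomic flag, or interrupt handshake as noted above. All names are invented for the example.

```c
/* Ping-pong buffering sketch: the producer fills the inactive (back)
 * buffer while the consumer uses the active (front) one; a pointer
 * exchange at a synchronization point swaps their roles without
 * copying any data. */
#include <stdio.h>

#define FRAME_SIZE 64

static char buf_a[FRAME_SIZE];
static char buf_b[FRAME_SIZE];
static char *back = buf_a;    /* being filled by the producer    */
static char *front = buf_b;   /* being consumed (e.g. displayed) */

static void swap_buffers(void) {
    char *tmp = front;        /* pointer exchange, no data copied */
    front = back;
    back = tmp;
}

int main(void) {
    for (int frame = 0; frame < 3; frame++) {
        snprintf(back, FRAME_SIZE, "frame %d", frame); /* produce */
        swap_buffers();                    /* synchronization point */
        printf("showing: %s\n", front);    /* consume */
    }
    return 0;
}
```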

Management and Implementation

Allocation Strategies

Allocation strategies for data buffers determine how memory is assigned to these temporary storage areas, balancing efficiency, predictability, and adaptability to varying workloads. Three primary approaches are employed: static, dynamic, and pool allocation. Static allocation assigns a fixed-size block of memory at compile time, which remains constant throughout program execution. This method is particularly suited for embedded systems where memory constraints are tight and requirements are predictable, as it eliminates runtime allocation overhead and ensures deterministic performance. In C, this can be implemented using fixed arrays declared globally or locally, while in C++, it involves stack-based arrays or static class members. However, static allocation lacks flexibility for handling variable data rates in buffers, potentially leading to wasted space if the fixed size exceeds actual needs. Dynamic allocation, in contrast, requests memory at runtime using functions like malloc and free in C or new and delete in C++, allowing buffers to resize based on immediate demands. This approach is ideal for applications with fluctuating data volumes, such as general-purpose computing tasks, but it introduces overhead from allocation calls and potential delays due to heap management. Trade-offs include a higher memory footprint from allocator metadata and the risk of exhaustion under heavy loads, though it provides greater adaptability than static methods. Pool allocation pre-allocates a collection of fixed-size blocks from which buffers can be quickly drawn and returned, minimizing repeated heap interactions and reducing fragmentation. This strategy reuses memory from dedicated pools tailored to specific buffer sizes, enhancing performance in high-frequency allocation scenarios like object caching, as sketched below. Key considerations in all strategies include managing buffer sizes to avoid excess memory usage and preventing fragmentation; for instance, buddy systems allocate power-of-two sized blocks to merge adjacent free spaces efficiently, thereby mitigating external fragmentation. Additionally, aligning buffers to hardware boundaries—such as 16-byte or 64-byte multiples—optimizes access speeds by enabling efficient SIMD instructions and cache-line transfers. In operating system kernels, slab allocators extend pool concepts by maintaining caches of initialized objects, including buffer pools for network packets, to accelerate allocation and reduce initialization costs. Overall, static allocation offers predictability at the cost of inflexibility, while dynamic and pool methods provide adaptability but require careful management to prevent resource exhaustion.
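To make the pool strategy concrete, here is a minimal fixed-size pool allocator in C along the lines described above: blocks are pre-allocated and threaded onto a free list, so acquiring and releasing a buffer are constant-time operations that bypass the general-purpose heap. Sizes and names are illustrative.

```c
/* Minimal fixed-size pool allocator sketch. Free blocks reuse their
 * own storage to hold the free-list link, so no extra metadata is
 * needed per block. */
#include <stddef.h>

#define BLOCK_SIZE 256
#define NUM_BLOCKS 32

typedef union block {
    union block *next;                 /* valid only while block is free */
    unsigned char payload[BLOCK_SIZE];
} block;

static block pool[NUM_BLOCKS];
static block *free_list;

static void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];   /* chain every block together */
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

static void *pool_acquire(void) {
    if (!free_list)
        return NULL;                   /* pool exhausted */
    block *b = free_list;
    free_list = b->next;               /* pop from free list: O(1) */
    return b->payload;
}

static void pool_release(void *p) {
    block *b = (block *)p;             /* payload is at offset 0 */
    b->next = free_list;               /* push back onto free list */
    free_list = b;
}
```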

Overflow and Error Handling

Buffer overflow occurs when a program attempts to write more data to a buffer than its allocated capacity, potentially overwriting adjacent memory locations and leading to crashes or security vulnerabilities. This condition arises from insufficient bounds checking during data input operations, such as in functions that copy strings or arrays without verifying lengths. A prominent type of buffer overflow is the stack-based buffer overflow, often exploited through techniques like stack smashing, where malicious code is injected to alter control flow and execute arbitrary instructions. Buffer underflow, conversely, happens when a program reads from a buffer that lacks sufficient data, typically because data is consumed faster than it is produced, resulting in attempts to read uninitialized or invalid memory. This can cause program crashes, data corruption, or in some cases, security issues if it exposes sensitive data beyond the buffer's intended bounds. Common handling strategies for overflows include bounds checking to validate input sizes before writing, truncation of excess data to fit the buffer, blocking the operation until space is available, or dropping incoming data to prevent corruption. In software implementations, assertions can halt execution upon detecting an overflow, while exceptions in languages like C++ or Java provide a mechanism to signal and recover from the error gracefully. For underflows, similar checks ensure sufficient data exists before reading, often triggering waits or error returns. A notable real-world example of a buffer over-read vulnerability is the Heartbleed bug (CVE-2014-0160), disclosed in 2014, which affected the OpenSSL cryptography library and allowed attackers to read up to 64 kilobytes of server memory per request due to a missing length validation in the heartbeat extension. This vulnerability compromised private keys, passwords, and session cookies across numerous systems, highlighting the risks of unchecked buffer operations in widely used software. Mitigation techniques include stack canaries, which insert random sentinel values between the buffer and critical stack data like return addresses; any overflow corrupts the canary, detectable before function return. Address space layout randomization (ASLR) randomizes memory addresses to make exploitation harder by complicating return-to-libc and similar attacks. These defenses, often enabled by compilers such as GCC, reduce vulnerability without fully eliminating the need for secure coding practices. In circular buffers, overflow handling typically follows a circular queue policy where, upon reaching capacity, new data overwrites the oldest entries, ensuring continuous operation without halting the producer. This approach prioritizes recent data retention, common in real-time systems like audio processing, but requires consumers to track valid data ranges to avoid reading stale information. Implementing checks, such as bounds validation, incurs a performance overhead due to runtime verifications, though optimizations like compiler-assisted checks can mitigate this impact.
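As a hedged illustration of the bounds-checking and truncation strategies above, the following C snippet contrasts an unchecked copy (which would overflow on long input) with a checked variant that validates the length and truncates; checked_copy is an invented helper, not a standard library function.

```c
/* Bounds-checked copy with truncation: never writes past the buffer,
 * always NUL-terminates, and reports how much input was dropped. */
#include <stdio.h>
#include <string.h>

#define BUF_LEN 16

static size_t checked_copy(char dst[BUF_LEN], const char *src) {
    size_t len = strlen(src);
    size_t kept = len < BUF_LEN - 1 ? len : BUF_LEN - 1;
    memcpy(dst, src, kept);               /* copy only what fits */
    dst[kept] = '\0';
    return len - kept;                    /* > 0 means input was truncated */
}

int main(void) {
    char buf[BUF_LEN];
    /* strcpy(buf, long_string) would write past buf: a classic overflow. */
    size_t dropped = checked_copy(buf, "a deliberately over-long input");
    printf("stored \"%s\" (%zu bytes dropped)\n", buf, dropped);
    return 0;
}
```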

Applications

In Computing Systems

In operating systems, data buffers are essential for managing input/output (I/O) operations in file systems, where they stage data in memory to bridge the gap between fast processors and slower storage devices. For example, Linux employs a page cache to store file contents temporarily in RAM, enabling subsequent reads and writes to be served directly from memory rather than accessing the disk each time, which significantly improves application performance. This caching mechanism aligns I/O operations with the file's address space, typically performing transfers at the page level to minimize overhead. Buffers also support inter-process communication (IPC) by providing temporary storage for data exchange between processes. Pipes offer a unidirectional channel where the kernel buffers data in a fixed-size queue, ensuring atomic writes up to the PIPE_BUF limit of 4096 bytes to prevent interleaving in concurrent scenarios. Shared memory, in contrast, creates a directly accessible region in the virtual address space that multiple processes can map and use for high-speed exchange without repeated copies. Disk buffering specifically aggregates small, scattered writes into larger contiguous blocks before flushing to storage, which reduces the number of mechanical seeks on hard disk drives (HDDs) and enhances write efficiency. In CPU pipelines, instruction buffers—implemented as registers between stages—hold fetched instructions to facilitate overlapping execution across multiple pipeline phases, thereby increasing instruction throughput. In virtual memory systems, buffers like the page cache interact with swap space by allowing less frequently used pages to be swapped out to disk when physical memory is under pressure, freeing RAM for active processes while preserving data. Double buffering further aids concurrency in multithreaded applications, where two buffers alternate roles—one for writing by a producer thread and one for reading by a consumer—reducing contention and enabling overlapping operations without locks. Overall, buffering mitigates mechanical delays in HDDs, such as seek times averaging several milliseconds, by batching and prefetching data in standard 4KB blocks that match common page and sector sizes. In shared environments, buffer overflows pose risks of data corruption if bounds are not enforced.
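A small C program can illustrate the kernel-buffered pipe IPC described above: bytes written to one end of a pipe are held in a fixed-size kernel buffer until the reader drains them, and POSIX guarantees that writes of at most PIPE_BUF bytes are atomic. This is a minimal sketch using only standard POSIX calls.

```c
/* Kernel-buffered IPC through a POSIX pipe: the write lands in the
 * kernel's pipe buffer and the read drains it. */
#include <limits.h>   /* PIPE_BUF */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];                   /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) != 0)
        return 1;

    const char msg[] = "hello through the kernel buffer";
    /* sizeof msg is far below PIPE_BUF, so this write is atomic. */
    write(fds[1], msg, sizeof msg);

    char in[128];
    ssize_t n = read(fds[0], in, sizeof in);  /* drains the pipe buffer */
    printf("read %zd bytes: %s\n", n, in);

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```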

In Networking

In networking, data buffers play a critical role in managing the transmission of packets across interconnected devices, particularly in handling variability in arrival times and rates to prevent loss and ensure reliable delivery. Packet buffers in routers temporarily store incoming packets when output links are congested, allowing for orderly forwarding and mitigating immediate drops. For instance, in TCP implementations, receive windows utilize buffers to hold acknowledged data segments, enabling the receiver to control the flow from the sender based on available memory. Similarly, jitter buffers in Voice over IP (VoIP) systems compensate for packet delay variations by queuing arriving packets and releasing them at a steady rate, thus smoothing out network-induced jitter to maintain audio quality without perceptible disruptions. At the protocol level, buffering occurs prominently in the OSI model's layer 2 (data link) and layer 3 (network) to support flow and congestion control mechanisms. Layer 2 switches and bridges use buffers to manage queuing during link-layer retransmissions, while layer 3 routers employ them for handling congestion amid traffic bursts. Congestion control in these layers often involves active queue management (AQM) techniques to signal impending congestion via packet drops or markings, preventing widespread instability. Queueing disciplines further refine this process; for example, First-In-First-Out (FIFO) treats all packets equally in a single queue, suitable for simple environments, whereas priority queueing assigns higher precedence to latency-sensitive traffic, as implemented in routers to favor VoIP or signaling packets over bulk data. TCP's sliding window relies on receive buffers to implement flow control, where the receiver advertises its available buffer space in window size announcements, limiting the sender's unacknowledged data to avoid overwhelming the receiver. This mechanism dynamically adjusts transmission rates based on buffer occupancy, ensuring end-to-end reliability without explicit network feedback. However, excessive buffering in network devices has led to the bufferbloat problem, where large queues accumulate packets during congestion, inflating latency—sometimes to seconds—despite high throughput, an issue prominently addressed in networking communities starting around 2012 through AQM algorithms like PIE (Proportional Integral controller Enhanced). Deep packet inspection (DPI) processes, used in firewalls and intrusion detection systems, demand substantial buffer capacities to reassemble and analyze fragmented or out-of-order packet streams for threat signatures, enabling threat detection without dropping legitimate traffic. In contrast, 5G networks prioritize low-latency applications by employing smaller, more efficient buffers with dynamic sizing at the radio link control (RLC) layer, often splitting responsibilities between RLC and Packet Data Convergence Protocol (PDCP) layers to minimize queuing delays while supporting ultra-reliable low-latency communications (URLLC). Circular buffers are occasionally referenced in packet queue implementations for their efficiency in handling continuous streams without frequent reallocations.
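The simple FIFO discipline described above can be sketched as a fixed-capacity drop-tail queue, the baseline behavior of an unmanaged router buffer: packets are forwarded in arrival order, and new arrivals are discarded once the buffer fills. The C types and names below are illustrative.

```c
/* Illustrative drop-tail FIFO packet queue: enqueue in arrival order,
 * drop new arrivals when the buffer is full. */
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_DEPTH 64

typedef struct {
    int packet_ids[QUEUE_DEPTH];  /* stand-in for real packet descriptors */
    size_t head, tail, count;
} fifo_queue;

/* Returns false when the queue is full: the drop-tail policy. */
static bool enqueue(fifo_queue *q, int packet_id) {
    if (q->count == QUEUE_DEPTH)
        return false;                          /* buffer full: drop packet */
    q->packet_ids[q->tail] = packet_id;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return true;
}

static bool dequeue(fifo_queue *q, int *packet_id) {
    if (q->count == 0)
        return false;                          /* nothing to forward */
    *packet_id = q->packet_ids[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return true;
}
```

AQM schemes such as PIE and CoDel refine exactly this structure by dropping or marking packets before the queue fills, based on measured queuing delay rather than occupancy alone.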

In Multimedia Processing

In multimedia processing, data buffers play a crucial role in managing time-sensitive media streams, such as audio, video, and graphics, to ensure smooth playback and rendering without interruptions. Frame buffers in graphics processing units (GPUs) store pixel data for rendered images, allowing the GPU to compose and update visual content efficiently before displaying it on screen. Similarly, audio buffers in sound cards hold samples of audio data, preventing glitches by compensating for variations in processing speed between the CPU and audio hardware. A key application is double buffering in graphics APIs such as OpenGL, where two buffers—one front (displayed) and one back (rendered to)—alternate to eliminate tearing during updates. This technique synchronizes rendering with the display refresh rate, producing tear-free visuals in applications such as games and animations. In video streaming, adaptive buffering dynamically adjusts buffer sizes based on available bandwidth; for instance, streaming services employ algorithms that monitor network conditions to scale video quality and buffer depth, minimizing rebuffering events while maintaining continuous playback. In audio processing, buffers typically hold 512 samples at a 44.1 kHz sample rate, corresponding to approximately 11.6 milliseconds of audio, which balances low latency with CPU load in digital audio workstations. If the buffer underruns—meaning it empties before new data arrives—audible artifacts like pops or clicks occur due to incomplete sample delivery to the audio hardware. Modern advancements include AI-accelerated buffering in video codecs like AV1, which optimizes real-time transcoding by predicting and pre-fetching data segments to reduce latency in live-streaming scenarios. These buffers smooth rate variations between encoding, transmission, and decoding, enabling high-quality playback even under fluctuating network conditions.
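The 11.6-millisecond figure quoted above follows directly from dividing the buffer size by the sample rate; the trivial C check below reproduces the arithmetic.

```c
/* Audio buffer latency = samples / sample_rate. */
#include <stdio.h>

int main(void) {
    double samples = 512.0;          /* common DAW buffer size      */
    double rate = 44100.0;           /* CD-quality sample rate (Hz) */
    double latency_ms = samples / rate * 1000.0;
    printf("%.1f ms\n", latency_ms); /* prints 11.6 */
    return 0;
}
```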

Historical Development

Origins in Early Computing

The concept of data buffering in computing arose during the 1940s and 1950s as electronic computers transitioned from experimental machines to practical systems, addressing the significant speed disparities between rapid central processing units (CPUs) and slower electromechanical peripherals such as punch card readers and magnetic tape drives. These early buffers functioned as temporary storage areas to hold data from mechanical input devices, mitigating delays caused by their physical limitations—punch card readers, for instance, processed cards at rates of around 100 to 200 per minute, far slower than emerging CPU cycle times. By staging data in memory, buffering allowed CPUs to proceed with computations without idling, marking a foundational technique for efficient input/output (I/O) management in pre-operating system era machines. A pivotal implementation occurred with the UNIVAC I, the first commercial general-purpose electronic computer delivered in 1951, which incorporated dedicated tape buffers for data staging and overlapped I/O operations. The system featured two 60-word buffers—one for input and one for output—integrated with its UNISERVO tape drives, enabling asynchronous data transfer from tape while the CPU executed instructions. This design represented the earliest commercial example of buffered I/O, allowing the UNIVAC I to handle business and scientific workloads by decoupling tape read/write speeds (up to 7,200 characters per second) from the CPU's processing rate, thus reducing overall job turnaround times in environments reliant on offline data preparation. The UNIVAC I's buffering approach was essential for its role in high-profile applications, such as the 1952 U.S. presidential election prediction. Throughout the 1950s, batch processing systems further entrenched buffering practices, using emerging core memory as dedicated buffers to manage sequential job execution and data flow. These systems, common in installations employing computers such as the IBM 650, grouped programs and data into batches processed offline via punched cards or tape, with core memory—tiny magnetic rings invented around 1951—serving as high-speed buffers to temporarily hold input data and intermediate results. This buffered approach minimized CPU downtime during I/O waits, supporting the era's unit-record processing paradigms where entire decks of cards were read into core before computation began. Core memory's non-volatile nature and access times under 10 microseconds made it ideal for such buffering, enabling efficient handling of business data in resource-constrained environments. The terminology "buffer" itself was adapted from electronics, where it described circuits introduced in the 1920s to match impedances between signal sources and loads, preventing reflections and signal degradation in early radio and telephone systems. By the 1950s, this analogy extended to computing, portraying memory areas as "cushions" that isolated fast computational elements from slower mechanical components, a conceptual shift that underscored buffering's role in system stability.

Evolution in Modern Systems

In the 1960s and 1970s, advancements in operating systems integrated data buffers more deeply into kernel architectures to support efficient file system operations. Multics, developed starting in 1965, influenced subsequent systems by employing sophisticated buffering mechanisms for its hierarchical file system, enabling high-performance multitasking and data access. Unix, emerging in the early 1970s, adopted similar kernel buffers to manage file blocks and inodes, chaining them together to optimize I/O throughput in time-sharing environments. Paralleling these developments, the ARPANET in the 1970s utilized packet buffers to mitigate congestion and contention in early packet-switched networks, where preempting buffers was a key design consideration for reliability. The 1980s and 1990s saw data buffers evolve with the rise of graphical user interfaces and networked computing. Early systems, such as the Xerox Alto introduced in 1973, pioneered frame buffering for bit-mapped displays, laying groundwork for double buffering techniques to eliminate flicker during rendering, which became widespread in commercial GUIs like Windows by the late 1980s. In networking, the TCP/IP protocol suite, standardized in the 1980s, incorporated buffer management for flow control and congestion avoidance, addressing challenges like packet reassembly and queue delays in growing internetworks. Circular buffers, an efficient ring-based structure for continuous data streams, became widely adopted in embedded systems during the 1980s for handling asynchronous data. From the 2000s onward, optimizations focused on reducing overhead and enhancing performance in storage and I/O. Linux introduced zero-copy buffering with the splice() system call in 2006, allowing direct data transfer between pipes without user-kernel copies, significantly improving throughput for file and network operations. Solid-state drives (SSDs), proliferating in the mid-2000s, incorporated internal buffers to cache writes and support wear leveling, distributing erase cycles evenly across flash cells to extend device lifespan. Post-2010 developments addressed scalability, security, and latency in distributed and consumer environments. Cloud storage systems like AWS S3 employed multipart upload buffering to handle large objects by dividing them into parts, enabling parallel transfers and fault-tolerant uploads. Emerging AI techniques began applying machine learning for predictive buffer allocation, using models to anticipate I/O patterns and dynamically adjust buffer sizes in data-intensive workloads. In home networking, bufferbloat—excessive queuing delays in routers—drove innovations like CoDel from 2012, which drops packets based on delay thresholds to mitigate latency spikes without sacrificing throughput. The 2014 Heartbleed vulnerability, a buffer over-read in OpenSSL, heightened focus on secure buffer handling, prompting widespread audits and mitigations in cryptographic libraries to prevent memory leaks.
