
Bus mastering

Bus mastering is a capability in computer bus architectures that enables a connected device, known as a bus master, to independently seize control of the bus to initiate and manage direct memory access (DMA) transactions, allowing data transfers between peripherals and memory without ongoing intervention from the central processing unit (CPU). This mechanism contrasts with simpler bus systems where only the CPU acts as the master, requiring it to handle all communications and potentially creating bottlenecks. In operation, a bus master requests control of the bus through a dedicated signal, such as the REQ# line, to a central bus arbiter—often integrated into the host bridge or chipset—which evaluates competing requests and grants access via a GNT# signal, ensuring fair allocation and preventing conflicts among multiple devices. Once granted, the master can perform burst transfers of multiple data units at high speeds, supporting protocols like those in the PCI bus, where it maintains ownership for sequential data phases to optimize throughput, up to 133 MB/s in early implementations. This process includes address and command phases followed by data phases, with features like peer-to-peer communication enabling direct device-to-device interactions without routing through system memory.

The concept traces its roots to early 1970s innovations, such as dedicated DMA controllers, which introduced direct memory access for I/O devices to offload the CPU, evolving into more sophisticated bus-sharing schemes in the 1980s with the IBM PC/AT's MASTER signal for single-channel DMA access. True bus mastering, with hardware-mediated arbitration, preemption, and fairness algorithms, was realized in 1987 with IBM's PS/2 Micro Channel Architecture (MCA), which supported multiple masters and subsystem control blocks for enhanced efficiency. By the 1990s, Intel's PCI standard popularized it across personal computers, replacing older buses like ISA and EISA by providing plug-and-play compatibility, high bandwidth, and full bus mastering for graphics cards, network adapters, and storage controllers. Earlier precursors appeared in 1970s minicomputer and microcomputer designs, such as Intel's Multibus and the S-100 bus used in the 1975 Altair 8800, where devices could assert bus control for basic DMA.

Bus mastering remains fundamental to modern architectures like PCI Express (PCIe), where it facilitates high-speed, low-latency transfers in endpoints such as GPUs and SSDs, reducing CPU overhead and enabling scalable system performance in servers, desktops, and embedded devices. Its advantages include reduced CPU load, increased overall throughput, and support for concurrent operations, though it requires robust arbitration to avoid bus contention and ensure system stability.

Overview

Definition

Bus mastering is a feature in computer bus architectures that enables a peripheral device to seize control of the bus and independently initiate and manage data transfers, thereby reducing the need for continuous oversight by the central processing unit (CPU). This capability allows the device, known as the bus master, to generate addresses and control signals on the bus, facilitating efficient communication between peripherals and system memory or other devices. The primary components involved in bus mastering include the bus master, which takes temporary control to drive the bus; bus slaves, which are responsive devices that react to the master's requests by providing or receiving data; and the bus protocol, which governs the handover of mastership through defined signaling mechanisms. Common terminology encompasses "bus master" for the initiating controller, "bus slave" for the targeted responder, and "arbitration" for the master-slave handover process that resolves competing requests to ensure orderly bus usage. In contrast to traditional CPU-centric bus control, where the processor exclusively drives all addresses and control signals for every transaction, bus mastering empowers non-CPU devices to operate autonomously, offloading the CPU and improving overall system performance. This feature primarily enables applications such as direct memory access (DMA), where peripherals can transfer data directly to or from system memory without CPU intervention.
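
The roles above can be pictured with a minimal data model. The following C sketch is purely illustrative and not drawn from any particular bus specification; the type names (bus_cmd, bus_transaction) are hypothetical.

/* Illustrative model of a single bus transaction as driven by a bus master.
 * The names are hypothetical and do not come from any real bus standard. */
#include <stdint.h>

typedef enum {
    BUS_CMD_MEM_READ,
    BUS_CMD_MEM_WRITE,
    BUS_CMD_IO_READ,
    BUS_CMD_IO_WRITE
} bus_cmd;

typedef struct {
    uint32_t address;   /* driven onto the address lines by the master         */
    bus_cmd  command;   /* driven onto the control lines by the master         */
    uint32_t data;      /* driven by the master (write) or by the slave (read) */
} bus_transaction;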

Relation to DMA

Direct Memory Access (DMA) is a hardware mechanism that enables peripheral devices to transfer data directly to or from main memory without continuous CPU intervention, thereby improving system efficiency by offloading I/O operations. Bus mastering serves as the bus-level protocol that empowers these devices to initiate and control such transfers autonomously by temporarily taking over the system bus from the CPU. This integration allows DMA to function beyond simple controller-mediated transfers, enabling more flexible and performant I/O in modern architectures.

Bus mastering facilitates two primary types of DMA: first-party DMA, where the peripheral device itself acts as the bus master and directly drives the transfer cycles using its onboard logic; and third-party DMA, where a separate system-level DMA controller mediates the process, allocating bus channels while the device provides data but does not control the bus. In first-party DMA, the device integrates its own DMA engine, allowing it to request and acquire bus control independently, which is common in high-speed peripherals like graphics cards or storage controllers. Third-party DMA, by contrast, relies on a centralized controller (e.g., the DMA controller on the motherboard), which handles arbitration and transfers on behalf of the device, often limiting performance due to shared resources.

Through bus mastering, DMA extends its capabilities by permitting peripherals—such as hard disk drives or network interfaces—to generate memory addresses and perform reads/writes directly, minimizing CPU cycles compared to programmed I/O (where the CPU handles each byte) or interrupt-driven methods (where the CPU responds to each data event). This autonomy reduces CPU utilization by up to 30% in disk transfer scenarios, as the processor can execute other instructions while the device manages the bus. Bus mastering thus transforms DMA from a CPU-assisted process into a device-driven one, enhancing throughput for bandwidth-intensive tasks. Conceptually, the process under bus mastering follows a sequential flow: the peripheral detects data to transfer and asserts a bus request to the arbiter; upon being granted control via bus arbitration, it becomes the master and addresses memory directly for read/write operations; once the transfer completes, it releases the bus and signals the CPU via an interrupt if needed. This cycle—request, acquisition, access, and release—ensures efficient resource sharing without halting the CPU entirely.
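
From a device driver's perspective, first-party DMA usually reduces to writing a bus address, a length, and a start bit into device registers and then letting the device master the bus. The C sketch below illustrates that flow under this assumption; the dma_regs layout and field names are hypothetical and do not correspond to any real device.

/* Minimal sketch of first-party (bus-mastering) DMA from the driver's side.
 * The register layout is hypothetical; real devices define their own. */
#include <stdint.h>

typedef struct {
    volatile uint64_t dst_addr;  /* bus address the device will write to      */
    volatile uint32_t length;    /* number of bytes to transfer               */
    volatile uint32_t control;   /* bit 0: start transfer as bus master       */
    volatile uint32_t status;    /* bit 0: transfer complete                  */
} dma_regs;

void start_read(dma_regs *dev, uint64_t bus_addr, uint32_t len)
{
    dev->dst_addr = bus_addr;    /* where in system memory the data should go */
    dev->length   = len;
    dev->control  = 1u;          /* device now requests the bus, masters the
                                    transfer, and raises an interrupt when done;
                                    the CPU is free to do other work meanwhile */
}

int transfer_done(const dma_regs *dev)
{
    return dev->status & 1u;     /* polled here for simplicity; real drivers
                                    usually wait for the completion interrupt  */
}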

History

Early Developments

The concept of bus mastering originated in the mainframe and minicomputer eras of the 1960s and 1970s, where specialized I/O controllers managed transfers independently of the central processor to improve system efficiency. In IBM's System/360 family, introduced in 1964, I/O channels acted as dedicated processors for handling operations, performing transfers to/from main memory after CPU initiation and providing early direct memory access (DMA) capabilities without constant CPU involvement during the transfer. These channels, such as byte-multiplexer and selector types, allowed multiple devices to share memory access while the CPU focused on computation, marking a shift from CPU-bound I/O in prior systems.

The introduction of DMA controllers further advanced these ideas by enabling peripheral devices to seize bus control for transfers, a direct precursor to full bus mastering. The Intel 8257 DMA controller, released in 1976 as part of the 8085 microprocessor ecosystem, provided programmable channels for high-speed I/O-to-memory operations, allowing third-party devices to temporarily master the system bus and bypass CPU involvement. This innovation addressed bottlenecks in early designs, where CPU-mediated transfers limited performance for tasks like disk or tape operations. Bus mastering thus built on DMA to extend control beyond dedicated controllers to general peripherals.

Early bus architectures in the 1970s began incorporating explicit support for device-initiated transfers, enabling limited bus mastering in microcomputer systems. The S-100 bus, developed in 1974 for the Altair 8800, included master/slave arbitration signals that permitted DMA-capable cards to take control for data movement, fostering expandability in hobbyist and small-scale computing. Similarly, Intel's Multibus, introduced in 1976, standardized a parallel interface with bus arbitration logic, allowing multiple masters—including DMA controllers and coprocessors—to request and hold bus ownership for efficient inter-board communication in industrial and OEM applications.

A key milestone occurred with the transition from CPU-dominated control in 8-bit microcomputer systems to shared bus management in emerging 16-bit designs, enabling more complex arbitration and higher throughput. While 8-bit buses like the original S-100 prioritized simple CPU-centric access, 16-bit extensions and architectures such as Multibus supported wider data paths and concurrent masters, laying groundwork for scalable I/O in professional computing. This evolution reflected growing demands for multitasking and peripheral autonomy in the late 1970s.

Adoption in Personal Computers

The adoption of bus mastering in personal computers began with the introduction of the Industry Standard Architecture (ISA) bus alongside the IBM PC in 1981, though initial implementations were limited to basic direct memory access (DMA) capabilities managed by the system's DMA controller. Enhanced support for bus mastering peripherals emerged with the IBM PC/AT in 1984, allowing compatible adapters to request and gain control of the bus for independent data transfers, thereby offloading the CPU during I/O operations. This feature proved essential for early high-performance peripherals, such as SCSI host adapters, which required efficient handling of large data volumes without constant CPU intervention. In 1987, IBM introduced the PS/2 line with Micro Channel Architecture (MCA), which provided true bus mastering through hardware-mediated arbitration, preemption, and fairness algorithms, supporting multiple masters and subsystem control blocks for enhanced efficiency. However, its proprietary design and licensing fees limited widespread adoption beyond IBM's own systems.

The primary drivers for bus mastering's integration into PCs during the late 1980s were the growing demands for high-speed I/O in emerging applications like mass storage and local area networking. For instance, the Extended Industry Standard Architecture (EISA) bus, introduced in 1988 by a consortium of nine PC manufacturers, extended ISA to 32 bits and improved bus mastering to support up to 4 GB of memory addressing, facilitating faster peripherals such as SCSI controllers for hard drives and tape backups. Early Ethernet adapters, like National Semiconductor's SONIC-based EISA bus master cards, leveraged this capability to perform DMA transfers directly to system memory, enabling reliable network performance in multitasking environments without overburdening the processor. These advancements addressed bottlenecks in data-intensive tasks, such as file transfers over networks or audio/video buffering, which were becoming prevalent with the rise of graphical user interfaces.

In the 1990s, the Peripheral Component Interconnect (PCI) bus, specified by Intel in June 1992, marked a pivotal shift by standardizing bus mastering across a plug-and-play architecture that supported multiple masters with centralized arbitration. This enabled widespread adoption in consumer PCs, particularly with the proliferation of Windows 95 and OS/2 Warp, which included native support for PCI bus-mastering drivers to manage device initialization and resource allocation. By the mid-1990s, operating systems like Windows provided optimized drivers for bus-mastering IDE controllers, such as the Triones drivers for Intel's PIIX chipsets released in 1995, significantly reducing CPU utilization during disk I/O in multitasking scenarios and improving overall system responsiveness.

Technical Operation

Bus Arbitration

Bus arbitration is the process by which multiple potential bus masters compete for control of a shared bus, ensuring that only one device can initiate transfers at a time to avoid conflicts and maintain orderly operation in bus mastering environments. This mechanism is essential for enabling direct memory access (DMA) controllers or other peripherals to take over the bus from the CPU without system crashes or data corruption. In bus mastering, the arbitration process resolves requests from devices seeking to become the active master, determining which one gains temporary control based on predefined rules.

There are two primary types of bus arbitration: centralized and distributed. In centralized arbitration, a single dedicated arbiter—such as the CPU or a separate controller—manages all requests and grants bus access, evaluating competing claims and assigning ownership through a unified decision process. This approach simplifies hardware design but can introduce bottlenecks if the arbiter becomes overwhelmed. Examples include daisy-chaining, where devices are linked in a serial fashion to propagate grant signals, with fixed priority based on position in the chain. In contrast, distributed arbitration allows all connected devices to participate directly in the resolution, using schemes such as self-selection, where each device places a priority ID on shared arbitration lines to determine the winner without a central authority. Distributed methods promote scalability in multi-master systems but require more complex wiring and signaling to ensure correct operation.

Key signals facilitate the arbitration handshake between requesters and the bus controller. Devices typically assert a Request signal (often denoted as REQ# in active-low protocols) to indicate their need for bus control, prompting the arbiter to evaluate and respond with a Grant signal (GNT#) to the selected master. In early microprocessor-based systems, such as those using the Intel 8086, the external master asserts the HOLD signal to request control, to which the processor responds with HLDA (Hold Acknowledge) at the end of its current bus cycle, tri-stating its outputs to relinquish control and allow the new master to drive the lines. These signals ensure a clean transition of bus ownership, with the granting process often including a brief idle period to stabilize the bus.

Priority resolution schemes determine which requester wins during contention, balancing efficiency and equity. Fixed priority assigns static ranks to devices—e.g., the CPU or critical controllers always outrank peripherals—to minimize latency for high-importance tasks, though this risks bus starvation for lower-priority devices if higher ones dominate. Rotating priority, also known as round-robin arbitration, cyclically shifts the highest priority after each grant, ensuring fairer access and preventing indefinite delays for any single requester. Fair arbitration extends this by incorporating time-based or request-history mechanisms to explicitly avoid starvation, guaranteeing that every active device eventually gains access within a bounded timeframe.

Latency in bus arbitration refers to the delay from a device's request assertion to receiving the grant, influenced by factors like the number of contenders and the scheme employed. This period typically includes evaluation time by the arbiter plus turnaround cycles—idle clock periods needed for address and data lines to settle after the previous master releases them, often 1-2 cycles in synchronous systems to prevent signal contention.
High contention can extend this latency to several bus cycles, impacting overall system throughput, though efficient schemes like independent request-grant lines in centralized setups can reduce it to under one cycle in low-load scenarios. Once granted, the master proceeds to initiate data transfers, but arbitration overhead remains a key factor in bus mastering performance.
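
As a concrete illustration of the rotating-priority scheme described above, the following self-contained C sketch models an arbiter's grant decision in software; in real hardware this logic lives in the chipset or arbiter circuit, and the function and constant names here are purely illustrative.

/* Sketch of a rotating-priority (round-robin) arbiter modelled in C.
 * req is a bitmask of asserted REQ# lines; the function returns the index
 * of the granted master, or -1 if no master is requesting the bus. */
#include <stdint.h>

#define NUM_MASTERS 4

static int last_granted = NUM_MASTERS - 1;    /* search starts after this one */

int arbitrate(uint8_t req)
{
    for (int i = 1; i <= NUM_MASTERS; i++) {
        int candidate = (last_granted + i) % NUM_MASTERS;
        if (req & (1u << candidate)) {        /* REQ# asserted by this master */
            last_granted = candidate;         /* rotate priority past winner  */
            return candidate;                 /* assert its GNT# line         */
        }
    }
    return -1;                                /* bus stays idle (parked)      */
}

Because the search always begins just past the most recently granted master, a device that keeps requesting the bus cannot lock out the others, which is the fairness property the text attributes to rotating priority.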

Data Transfer Process

Once bus control has been granted through the arbitration process, a bus master initiates a data transfer cycle by entering the address phase, during which it drives the target memory or I/O address onto the address bus and asserts control signals specifying the operation type, such as memory read, memory write, I/O read, or I/O write. The data phase immediately follows, enabling the actual transfer of information between the master and the target device. In a read operation, the bus master has already provided the address and now waits for the target to decode it and drive the requested data onto the data bus; the master samples this data upon receiving an acknowledgment from the target. For a write operation, the bus master supplies both the address (from the prior phase) and the write data onto the data bus, with the target latching the information once ready; some bus architectures incorporate parity bits or error detection mechanisms during this phase to verify data integrity.

To enhance efficiency, especially for sequential accesses, bus masters support burst modes consisting of a single address phase followed by multiple consecutive data phases, transferring several words without reasserting the address each time. In cache-based systems, these burst transfers incorporate cache coherency handling, where the bus master or supporting hardware snoops or invalidates relevant cache lines to prevent stale data inconsistencies across multiple processors or devices. Each phase concludes with termination signals from the target, such as a ready indicator (e.g., RDY# in legacy buses or TRDY# in more advanced designs) to signal successful completion of the data transfer, or a stop signal for error conditions like invalid addresses, ensuring the operation remains atomic and the bus is released promptly by the master.
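
The phase sequence above can be summarized in pseudocode. The C sketch below models a burst write from the master's side under the assumptions just described; bus_drive_address(), bus_drive_data(), and bus_target_ready() are hypothetical stand-ins for the actual electrical signalling, not functions from any real API.

/* Sketch of a burst write as seen from the master: one address phase
 * followed by several consecutive data phases. */
#include <stdint.h>
#include <stddef.h>

void bus_drive_address(uint32_t addr, int write);  /* address/command phase  */
void bus_drive_data(uint32_t word);                /* one data phase         */
int  bus_target_ready(void);                       /* e.g. TRDY#-style ready */

void burst_write(uint32_t addr, const uint32_t *buf, size_t words)
{
    bus_drive_address(addr, 1);          /* single address phase for the burst */
    for (size_t i = 0; i < words; i++) {
        while (!bus_target_ready())      /* wait states inserted by the target */
            ;
        bus_drive_data(buf[i]);          /* consecutive data phases; the       */
    }                                    /* address increments implicitly      */
}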

Implementations

In ISA and EISA

Bus mastering in the Industry Standard Architecture (ISA) bus was implemented optionally through direct memory access (DMA) channels managed by controllers like the Intel 8237A, which provided four channels per controller (channels 0-3 for 8-bit transfers and 5-7 for 16-bit, with channel 4 used for cascading between controllers). These channels allowed peripherals to request bus control, but the process relied on third-party DMA mode, where the CPU had to actively relinquish the bus by responding to a HOLD request with HLDA (hold acknowledge), introducing significant processor involvement and potential delays. The bus operated at a maximum clock speed of 8 MHz, limiting transfer rates to approximately 5 MB/s theoretically for 16-bit operations, though real-world performance was often lower due to overhead.

ISA bus mastering suffered from compatibility issues, particularly in systems where the bus ran in half-speed mode—such as 4 MHz when the system clock was 8 MHz—to accommodate slower peripherals or 8-bit slots, creating bottlenecks that reduced overall throughput and made it unsuitable for high-performance data transfers. This mode was common to ensure reliability with legacy hardware, but it exacerbated latency during bus master operations, as the CPU's involvement in arbitration could lead to contention and inefficient resource sharing.

The Extended Industry Standard Architecture (EISA) addressed many of these limitations by introducing native 32-bit addressing, enabling access to up to 4 GB of memory through signals like A[31:0] and byte enables BE#[3:0]. EISA supported dedicated bus master modes with request (MREQx#) and grant (MAKx#) lines for arbitration, managed by a Central Arbitration Control block, allowing up to 15 concurrent bus masters including the CPU, DMA controllers, and peripherals. This setup used a rotational priority scheme, configurable via settings on the motherboard or expansion cards to assign priority levels and ensure fair access, with preemption possible within 64 bus clock cycles (approximately 8 µs at standard speeds). In practice, EISA bus mastering enabled peripherals like tape drives to operate as intelligent masters, performing high-speed burst-mode transfers directly to system memory without constant CPU intervention, improving efficiency for demanding devices in personal computers. Slaves in EISA indicated their data path width via signals such as EX16# or EX32#, ensuring compatibility with ISA cards while leveraging the enhanced architecture for 32-bit operations.
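
The third-party DMA mode described above is visible in how legacy PC software programs the 8237A through its fixed I/O ports. The following C sketch outlines setting up channel 2 for a device-to-memory transfer on a PC/AT-class machine; it is a minimal illustration that assumes an outb() port-output helper and omits the paging, alignment, and caching concerns a real driver must handle.

/* Minimal sketch of third-party ISA DMA: programming channel 2 of the first
 * 8237A controller for a device-to-memory ("write") transfer.  The buffer
 * must lie below 16 MB and must not cross a 64 KB boundary. */
#include <stdint.h>

void outb(uint16_t port, uint8_t val);   /* platform-provided port output */

void isa_dma2_setup(uint32_t phys, uint16_t count)
{
    outb(0x0A, 0x06);                    /* mask channel 2 while programming  */
    outb(0x0C, 0x00);                    /* clear the byte address flip-flop  */
    outb(0x0B, 0x46);                    /* single mode, write transfer, ch 2 */
    outb(0x04, phys & 0xFF);             /* base address, low byte then high  */
    outb(0x04, (phys >> 8) & 0xFF);
    outb(0x81, (phys >> 16) & 0xFF);     /* page register for channel 2       */
    outb(0x05, (count - 1) & 0xFF);      /* transfer count minus one          */
    outb(0x05, ((count - 1) >> 8) & 0xFF);
    outb(0x0A, 0x02);                    /* unmask channel 2; the device may  */
}                                        /* now assert DREQ2 to start moving  */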

In PCI and PCIe

Bus mastering in PCI, introduced by the PCI Special Interest Group in 1992, enables peripheral devices to take control of the bus to perform direct memory access (DMA) transfers, improving system efficiency by offloading the CPU. The PCI specification defines a parallel bus architecture operating at 33 MHz with 32-bit or optional 64-bit data widths, supporting burst transfers of up to 256 bytes to minimize overhead during high-volume data movements. Arbitration is handled centrally by the host bridge using four dedicated request (REQ#) and grant (GNT#) signal pairs per bus segment, allowing up to four concurrent masters; this point-to-point signaling ensures fair access through priority or round-robin algorithms, with the bus master asserting FRAME# to initiate transactions after receiving a grant.

PCIe, evolving from PCI since its initial specification in 2003, replaces the parallel shared bus with high-speed serial point-to-point links, scaling from x1 to x32 lanes and supporting data rates up to 64 GT/s per lane in PCIe 6.0 (specification finalized in 2022, with initial products launching as of 2025) for greater bandwidth. Bus mastering in PCIe occurs through the transaction layer, where devices generate transaction layer packets (TLPs) to request memory reads/writes, I/O operations, or configuration accesses, encapsulated with headers defining the transaction type, address, and length (up to 4 KB per TLP). Virtual channels enhance quality of service by providing multiple logical paths over the physical link, enabling prioritized traffic flow via credit-based flow control to prevent congestion during mastering operations.

Key optimizations in PCIe include posted writes, where write TLPs are fire-and-forget without requiring acknowledgments, reducing overhead for bulk data transfers, and the ability to disable interrupts during active mastering to avoid CPU intervention. PCIe also integrates support for Input-Output Memory Management Units (IOMMUs), which translate device-visible addresses to physical ones, enhancing security by isolating DMA accesses from unauthorized memory regions. For instance, graphics cards employ PCIe bus mastering for GPU DMA to stream textures and frame buffers directly to system memory at rates exceeding 16 GB/s on x16 links, while NVMe SSD controllers use it for high-bandwidth command queuing and data transfers, achieving sequential read speeds over 7 GB/s in modern implementations.
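
Software must explicitly permit a PCI or PCIe function to act as a master by setting the Bus Master Enable bit (bit 2) of the Command register at configuration-space offset 0x04; operating systems expose this through helpers such as Linux's pci_set_master(). The C sketch below shows the register manipulation directly, with pci_cfg_read16()/pci_cfg_write16() as hypothetical configuration-access helpers standing in for whatever mechanism the platform provides.

/* Sketch of enabling bus mastering for one PCI function via its Command
 * register.  The configuration-access helpers are hypothetical. */
#include <stdint.h>

uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_cfg_write16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                         uint16_t val);

#define PCI_COMMAND      0x04
#define PCI_CMD_MEMORY   (1u << 1)   /* respond to memory-space accesses  */
#define PCI_CMD_MASTER   (1u << 2)   /* allow the device to act as master */

void enable_bus_mastering(uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint16_t cmd = pci_cfg_read16(bus, dev, fn, PCI_COMMAND);
    cmd |= PCI_CMD_MEMORY | PCI_CMD_MASTER;   /* device may now issue DMA  */
    pci_cfg_write16(bus, dev, fn, PCI_COMMAND, cmd);
}

Clearing the same bit is the standard way firmware and operating systems quiesce a device's DMA, for example before handing control of the platform to the next boot stage.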

In Other Architectures

Bus mastering extends beyond PC-centric architectures to various embedded, industrial, and specialized systems, where it enables efficient data movement in resource-constrained or multi-device environments. In the ARM Advanced Microcontroller Bus Architecture (AMBA), particularly the AXI protocol, bus mastering is implemented through master interfaces that allow system-on-chip (SoC) components, such as GPUs and DMA controllers, to initiate transactions independently of the central processor. The AXI4 specification supports multiple concurrent masters with features like out-of-order transaction completion and burst transfers, optimizing high-bandwidth interconnects in mobile and embedded processors.

In storage and peripheral interfaces like USB and SATA, host controllers employ bus mastering to perform direct memory access (DMA) operations, offloading the CPU during data transfers. For SATA, the Advanced Host Controller Interface (AHCI) enables the controller to act as a bus master on the PCI or PCIe bus while managing multiple devices; notably, AHCI supports port multipliers that extend connectivity to additional drives, allowing the controller to arbitrate and master transfers across the multiplier topology without CPU intervention. Similarly, USB host controllers in embedded systems use bus mastering for DMA to handle bulk data from devices like sensors or cameras, ensuring low-latency communication in real-time applications.

Industrial and embedded architectures from earlier decades also incorporated bus mastering for robust multi-device coordination. The VMEbus, introduced in the early 1980s, features a multi-master design with centralized arbitration using daisy-chain signals to grant bus ownership, supporting systems in industrial control and scientific instrumentation where predictable latency is critical. In Apple's Macintosh systems, the NuBus employed cooperative bus mastering, where masters request control via synchronous arbitration and can extend tenure through bus locks, allowing shared access to multiport memories without aggressive contention, which facilitated expansion cards in compact computing environments.

Modern variants of bus mastering appear in Internet of Things (IoT) devices, where extensions to low-speed buses like I2C and SPI integrate DMA capabilities for efficient sensor data handling. In microcontrollers from common vendor families, I2C and SPI peripherals can trigger DMA transfers, enabling peripherals to master the internal bus for autonomous data movement from sensors to memory, reducing CPU overhead in battery-powered nodes. This approach supports multi-master arbitration in I2C for conflict resolution during concurrent sensor reads, enhancing scalability in distributed sensor networks.
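
The microcontroller pattern just described typically pairs a peripheral event (for example, a received I2C byte) with a DMA channel that moves the data into memory without CPU involvement. The C sketch below is generic and assumes a hypothetical register layout (dma_channel); every vendor's actual peripheral map differs.

/* Generic sketch of a microcontroller DMA channel set up to drain an I2C
 * receive data register into a memory buffer.  All names are hypothetical. */
#include <stdint.h>

typedef struct {
    volatile uint32_t src;     /* peripheral data register to read from     */
    volatile uint32_t dst;     /* memory buffer to fill                     */
    volatile uint32_t count;   /* number of bytes to move                   */
    volatile uint32_t ctrl;    /* bit 0: enable, triggered by I2C RX events */
} dma_channel;

void i2c_rx_to_buffer(dma_channel *ch, volatile uint32_t *i2c_rxdr,
                      uint8_t *buf, uint32_t len)
{
    ch->src   = (uint32_t)(uintptr_t)i2c_rxdr;  /* fixed peripheral address  */
    ch->dst   = (uint32_t)(uintptr_t)buf;       /* incrementing memory addr  */
    ch->count = len;
    ch->ctrl  = 1u;            /* each received I2C byte now moves to memory */
}                              /* while the CPU sleeps or does other work    */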

Advantages and Challenges

Benefits

Bus mastering provides significant CPU offloading by allowing peripheral devices to initiate and manage data transfers directly to and from system memory, thereby freeing the processor from repetitive I/O handling and enabling it to focus on computational tasks. This offloading leads to overall system throughput improvements in I/O-intensive workloads, such as disk transfers, compared to programmed I/O methods. Direct memory access enabled by bus mastering minimizes latency through reduced overhead and streamlined data paths, as devices bypass the CPU for transfers, resulting in smoother performance in real-time applications like uninterrupted video streaming or audio processing without frame drops or buffering delays. For instance, in multimedia playback or capture systems, bus mastering ensures timely delivery to display and audio buffers, preventing bottlenecks in high-resolution playback scenarios.

In multi-device environments, bus mastering enhances scalability by supporting concurrent I/O operations across multiple peripherals, which is particularly valuable in server architectures handling diverse workloads like database queries and network request servicing simultaneously. This concurrency allows systems to maintain performance as device counts increase, without proportionally burdening the CPU, thereby supporting robust multitasking in enterprise settings. Bus mastering optimizes efficiency through features like burst modes, where devices can transfer large data blocks in rapid succession to saturate available bus capacity, which is essential for high-throughput networking applications such as 1 Gbps Ethernet adapters that stream packets directly into system memory to achieve full line speeds. In these scenarios, the master's burst transfers ensure maximal utilization of bus resources, doubling effective rates on 64-bit buses compared to narrower architectures, without unnecessary CPU intervention.

Limitations and Issues

Bus mastering introduces significant design complexity due to the need for sophisticated arbitration logic to manage contention among multiple masters, which can lead to increased costs and higher power consumption in system implementations. In multi-master environments, this complexity heightens the risk of deadlocks, where devices are unable to proceed because each is waiting for resources held by another, necessitating advanced avoidance mechanisms such as rearbitration protocols in bus architectures like CoreConnect. Compatibility challenges arise particularly in legacy systems, where bus mastering devices may conflict with interrupt request (IRQ) sharing limitations; for instance, ISA bus devices generally require unique IRQs and cannot share them if simultaneous use is possible, leading to resource contention and installation difficulties. Additionally, bus mastering enables direct memory access (DMA), which poses security risks by allowing peripherals to read or write arbitrary memory locations without CPU mediation, potentially enabling data theft or injection; these vulnerabilities are mitigated in modern systems through Input-Output Memory Management Units (IOMMUs) that enforce address translation and access controls.

In low-load scenarios involving small data transfers, the overhead of bus mastering can outweigh its benefits, as the arbitration latency—often spanning several clock cycles to grant bus control—results in poorer performance compared to simpler CPU-polling methods, which avoid such delays for frequent, minor operations. Error handling in bus mastering systems presents further issues, including the detection and handling of bus errors like master aborts or target aborts, which require software to detect, log, and recover from faults such as invalid addresses or timeouts. Coherency problems also emerge in cached environments, where DMA transfers bypass caches, necessitating explicit software actions like cache invalidation or flushing to ensure data consistency across bus masters and processors, thereby adding to programming complexity and potential for errors if not properly implemented.
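
On systems without hardware-enforced DMA coherency, the cache maintenance mentioned above is the driver's responsibility. The C sketch below illustrates the usual discipline; cache_clean_range() and cache_invalidate_range() are hypothetical names standing in for the platform's actual cache operations.

/* Sketch of explicit cache maintenance around a bus-mastered DMA buffer on a
 * platform without hardware DMA coherency.  Helper names are hypothetical. */
#include <stddef.h>

void cache_clean_range(void *addr, size_t len);       /* write dirty lines back */
void cache_invalidate_range(void *addr, size_t len);  /* drop stale lines       */

void before_device_reads(void *buf, size_t len)
{
    cache_clean_range(buf, len);        /* make CPU writes visible to the master */
}

void after_device_writes(void *buf, size_t len)
{
    cache_invalidate_range(buf, len);   /* make the master's writes visible to   */
}                                       /* the CPU on its next read of buf       */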

    Then eDMA transfers the data to SAI FIFO is incorrect, and the data coherency problem occurs. To avoid such data coherency issue, here're some solutions: 1.<|control11|><|separator|>