
Direct-access storage device

A direct-access storage device (DASD) is a type of secondary storage that enables data to be retrieved or modified by specifying its exact physical or logical address, allowing access without sequentially scanning prior records, in contrast to sequential-access media such as magnetic tapes. This capability makes DASDs essential for efficient data management in computing systems, particularly in environments requiring quick access to large volumes of information.

The concept of DASD originated with IBM's development of the Random Access Method of Accounting and Control (RAMAC) system in the early 1950s, culminating in the introduction of the IBM 350 Disk Storage Unit in 1956 as the world's first commercial hard disk drive. This unit, storing 5 million characters across fifty 24-inch platters, marked a significant advancement over punched cards and tapes by providing direct access at transfer speeds up to 8,800 characters per second. With the launch of the IBM System/360 mainframe family in 1964, DASD technology became standardized, integrating rotating magnetic disk drives as core components for online transaction processing and data management in enterprise computing. Over the following decades, DASD evolved from early head-per-track designs to multi-platter Winchester technology, with notable models including the 3370 (introduced in 1979, offering 571 megabytes per unit) and the 3380 (1980, providing up to 2.52 billion characters of storage with reduced energy consumption). Later advancements, such as the 3390 in 1989, increased capacities to several gigabytes while improving reliability through error-correcting codes and faster access times.

Today, the term DASD persists primarily in IBM mainframe contexts, encompassing both traditional rotating disk drives and modern solid-state drives, which serve as fixed or removable volumes in logical volume managers for operating systems such as z/OS and AIX. These devices typically use block-oriented access methods, with block sizes ranging from 512 to 22,000 bytes depending on the format (e.g., Count Key Data or Fixed Block Architecture).

Overview

Definition and Characteristics

A direct-access storage device (DASD) is a type of secondary storage that enables data to be accessed by specifying a unique address for each physical record, allowing retrieval without the need to read preceding data sequentially. This contrasts with sequential-access devices like magnetic tapes, which require reading from the beginning of the medium to reach a specific record. The term DASD was coined by IBM in 1964 to describe storage compatible with its System/360 mainframe architecture, initially referring to technologies such as hard disk drives, magnetic drums, and data cells. Key characteristics of DASD include non-volatility, meaning data persists without power, and block-oriented organization, where data is stored and accessed in fixed-size blocks for efficient handling. These devices support high-speed access through mechanical or electronic mechanisms involving seek time—the duration to position the read/write head—and rotational latency for spinning media, enabling rapid data location compared to linear traversal. Over time, DASD capacities have evolved dramatically, from several megabytes per unit in the 1960s to terabytes in contemporary implementations, reflecting advances in recording density and materials. Representative examples of DASD encompass hard disk drives (HDDs) using magnetic platters, solid-state drives (SSDs) based on flash memory, magnetic drum memory as an early form, and even optical discs for read-intensive applications. By providing random-access capability, DASD has been foundational in enabling multitasking operating systems and database management systems, which rely on quick, addressable data retrieval to support concurrent operations and indexed queries.

Comparison with Sequential-access Devices

Direct-access storage devices (DASD) fundamentally differ from sequential-access devices in their data retrieval mechanisms, enabling efficient random access that transformed computing applications. DASD use absolute addressing, such as cylinder-head-sector (CHS) or logical block addressing (LBA), to position the read/write head directly at the target location on the medium. This approach yields a roughly constant average access time, approximating O(1) time complexity for retrieving any record, independent of its position relative to previously accessed data. In contrast, sequential-access devices like magnetic tapes require linear traversal from the current head position to the desired record, resulting in O(n) time complexity for non-sequential operations, where n scales with the physical distance on the medium.

Performance characteristics underscore these access-pattern disparities. For DASD implemented as hard disk drives (HDDs), typical seek times—the duration to move the head to the correct track—range from 5 to 10 milliseconds, while rotational latency, the average wait for the target sector to rotate under the head, is about 4.2 milliseconds (maximum 8.3 ms) on 7200 RPM drives. Sequential devices, however, prioritize linear efficiency: modern Linear Tape-Open (LTO) generations deliver high sustained throughput of 300 to 400 MB/s for sequential reads and writes, but random access is severely limited, with average positioning times of 50 to 60 seconds due to the need to rewind or fast-forward across the length of the tape.

These differences directly influence suitable use cases. DASD excel in environments demanding frequent random I/O, such as database systems performing query processing and operating systems managing files, where direct record retrieval supports operations without scanning intervening data. Sequential devices are better suited for ordered data streams, including backups, archival storage, and audit logs, leveraging their high throughput for bulk transfers while tolerating slow repositioning. The shift to DASD enabled pivotal advancements in computing, particularly by supporting random I/O in interactive workloads like online transaction processing (OLTP), which replaced tape-driven batch processing with responsive, record-oriented systems capable of handling concurrent user requests efficiently.
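The magnitude of this gap can be checked with a back-of-the-envelope model. The sketch below uses the typical figures quoted above (8 ms seek, 7200 RPM, roughly 55 s tape repositioning) as illustrative assumptions rather than measurements of any particular device:

```python
# Back-of-the-envelope comparison of random-access cost on disk vs. tape.
# Parameters are the typical figures cited in the text, not device specs.

def disk_access_ms(seek_ms: float = 8.0, rpm: int = 7200) -> float:
    """Average random access time: seek plus half a rotation of latency."""
    full_rotation_ms = 60_000 / rpm          # one revolution in milliseconds
    return seek_ms + full_rotation_ms / 2    # ~8.0 + 4.17 ms

def tape_access_s(avg_reposition_s: float = 55.0) -> float:
    """Average random access on tape is dominated by winding to the target."""
    return avg_reposition_s

if __name__ == "__main__":
    d, t = disk_access_ms(), tape_access_s()
    print(f"disk : ~{d:.1f} ms per random access (roughly O(1) in record position)")
    print(f"tape : ~{t:.0f} s per random access (O(n) in distance along the tape)")
    print(f"ratio: tape is ~{t * 1000 / d:,.0f}x slower for random I/O")
```

The resulting ratio of several thousand to one is why random workloads moved to DASD while tape retained bulk sequential roles.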

History

Origins in IBM Systems

While the concept of direct-access storage devices predated the System/360 (as detailed above), the term "DASD" and its standardization emerged as a critical component of IBM's System/360, announced on April 7, 1964, to address the fragmentation caused by IBM's prior incompatible product lines, such as the commercial IBM 1401 and the scientific IBM 7090, by standardizing storage and I/O architectures across a unified family of machines. This initiative aimed to enable seamless upgrades and peripheral compatibility, replacing disparate systems that had hindered data interchange and program portability in the early 1960s. Pivotal to this ecosystem was the IBM 2311 Disk Storage Drive, introduced in 1964 alongside the System/360, which provided 7.25 megabytes of storage per removable disk pack, using six 14-inch disks per spindle, and supported pack interchange for efficient offline storage. Another early innovation, the 2321 Data Cell Drive, announced in 1964 and shipped starting in 1965, utilized removable cells, each containing 200 short magnetic strips, to achieve up to 400 megabytes of capacity in a single unit, marking an ambitious attempt at high-density, cartridge-based mass storage. However, the 2321 faced persistent reliability challenges, including frequent strip jams in its complex retrieval mechanism—derisively called the "noodle picker"—leading to its withdrawal from marketing in January 1975. The term "direct-access storage device" originated in IBM's technical documentation for the System/360 era, with the acronym DASD first appearing in the March 1966 Data File Handbook, which described these devices as enabling random record retrieval on magnetically coated disks or strips, in contrast to sequential media like punched cards and magnetic tapes. This terminology underscored the need for random-access capabilities to support emerging multiprogramming environments on the System/360, where multiple programs could concurrently access shared data without the delays inherent in sequential storage methods such as tape reels or card decks.

Evolution and Modern Usage

In the 1970s, IBM advanced DASD technology by transitioning from removable disk packs to non-removable, sealed assemblies, exemplified by the IBM 3350 introduced in 1976, which offered 317.5 MB per spindle in a fixed head-disk assembly (HDA) to enhance reliability and reduce maintenance. This shift addressed limitations of earlier removable-pack designs, such as the IBM 3330, by eliminating pack handling and improving reliability through sealed environments that minimized contamination. By the late 1970s and into the 1980s, DASD usage in mainframe documentation proliferated, with the term becoming standard for high-capacity storage systems.

During the 1980s and 1990s, DASD evolved toward array-based configurations, as seen with the IBM 3990 Storage Control introduced in 1987, which supported multiple disk drives with caching and improved data paths to handle growing I/O demands in mainframe environments. This era marked the peak of the DASD term in enterprise contexts, where it encompassed controllers managing arrays that incorporated early redundancy features akin to RAID precursors, enabling scalable storage for business-critical applications. By the early 1990s, annual DASD capacity growth had moderated from the roughly 60% rate of the prior decade, reflecting maturation of the technology.

Entering the 2000s, DASD integrated solid-state drives (SSDs) and flash storage, particularly in IBM z Systems, with the 2012 introduction of Flash Express (generally available December 2012) providing a high-performance tier using flash cards to accelerate paging and I/O operations. Contemporary z Systems, including the IBM z17 announced in April 2025, support hybrid configurations combining HDDs and SSDs, allowing dynamic allocation for workloads requiring low latency, such as real-time transaction processing, with enterprise drives exceeding 30 TB per unit as of 2025. In cloud settings, DASD now often refers to virtualized block storage, including AWS Elastic Block Store (EBS) volumes used in mainframe emulation environments to provide DASD-like volumes for testing and development in hybrid clouds. The DASD term has declined outside IBM ecosystems, largely replaced by generic "disk storage" or "block storage" in broader computing, though it endures in z/OS documentation and JCL for defining datasets and volumes. This evolution has underpinned modern data infrastructures by delivering scalable, direct-access capabilities; modern enterprise drives routinely surpass 20 TB per unit, supporting exabyte-scale analytics.

Architectures and Data Formats

Count Key Data (CKD)

Count Key Data (CKD) is a variable-length record format for direct-access storage devices (DASD) developed by IBM, serving as the foundational data organization method for mainframe storage since its introduction with the System/360 family in 1964. This format enables efficient direct access to records on disk tracks, optimized for the parallel channel architecture of early mainframe systems. CKD remains the standard format for DASD volumes in OS/360, MVS, and z/OS environments, where it supports core functions like volume table of contents (VTOC) management and direct access methods such as BDAM and BSAM.

The structure of a CKD track begins with a home address, which identifies the track's physical location in CCHH form (a two-byte cylinder number followed by a two-byte head number), followed by a track descriptor record (Record 0) that provides metadata without a key or data field. Subsequent data records on the track each consist of three primary fields: the Count field, an optional Key field, and the Data field. The Count field is an 8-byte area containing the full CCHHR identifier (cylinder, head, and record number) for precise record addressing, along with the lengths of the Key and Data fields to enable hardware-level navigation and gap management between records. The Key field, if present, ranges from 1 to 255 bytes and holds an identifier (such as an account number or part identifier) used by applications for record selection. The Data field carries the variable-length user data, allowing records to vary in size up to the remaining track capacity.

Addressing in CKD relies on the CCHHR scheme embedded in the Count field of each record, which specifies the cylinder, head (surface), and record number within the track, facilitating direct physical access optimized for mainframe I/O channels. For records exceeding the available space on a single track—such as large variable-length entries—CKD supports track overflow, in which a flag in the count area marks a segment as continuing on the next available track, ensuring continuity without fragmentation. This mechanism is particularly useful in handling oversized records in sequential or indexed processing.

CKD also excels at managing indexed sequential files through the Indexed Sequential Access Method (ISAM), where the Key field acts as a hardware-searchable index for rapid direct retrieval and updates in prime, index, and overflow areas. ISAM datasets on CKD DASD organize records sequentially with track, cylinder, and master indexes, enabling efficient QISAM sequential scans or BISAM direct access via macros like GET and PUT.

The advantages of CKD lie in its flexibility for variable-length data, which minimizes storage waste by accommodating records of differing sizes without fixed padding, making it ideal for diverse mainframe applications like partitioned data sets and EXCP-level I/O. However, a key limitation arises in contemporary environments, where underlying disk drives employ fixed-block architecture: CKD must be emulated via microcode or software in storage controllers, introducing complexity in mapping variable-length records to fixed blocks and potential overhead in performance and space utilization. This emulation preserves compatibility but contrasts with the simpler native access of FBA systems.
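To make the count field concrete, the following sketch models the 8-byte layout described above (two bytes of cylinder, two of head, one record number, one key length, two of data length) and estimates whether a set of records fits on a track. The field widths and the fixed inter-record gap are illustrative assumptions, not a bit-exact reproduction of device microcode:

```python
import struct
from dataclasses import dataclass

# Illustrative model of a CKD count field: CCHHR identifier plus key/data
# lengths, packed into 8 bytes. A teaching sketch, not channel microcode.

@dataclass
class CountField:
    cylinder: int   # CC: cylinder number
    head: int       # HH: track (surface) within the cylinder
    record: int     # R : record number on the track
    key_len: int    # KL: 0 (no key) to 255 bytes
    data_len: int   # DL: length of the variable-length data field

    def pack(self) -> bytes:
        """Serialize to the 8-byte big-endian layout assumed above."""
        return struct.pack(">HHBBH", self.cylinder, self.head,
                           self.record, self.key_len, self.data_len)

def record_footprint(cf: CountField, gap: int = 100) -> int:
    """Approximate track space for one record: count + key + data + gaps.
    The gap size is a placeholder; real gaps vary by device geometry."""
    return 8 + cf.key_len + cf.data_len + gap

records = [CountField(100, 7, r, key_len=12, data_len=800) for r in range(1, 40)]
print(records[0].pack().hex())   # '00640007010c0320'
track_capacity = 56_664          # bytes per track on a 3390-style device
used = sum(record_footprint(cf) for cf in records)
print(f"{len(records)} records use {used} of {track_capacity} bytes; fits: {used <= track_capacity}")
```

Packing many small records onto a track raises the proportion of space lost to count fields and gaps, which is exactly the trade-off CKD's variable-length records let applications manage.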

Fixed Block Architecture (FBA)

Fixed Block Architecture (FBA) is a data format for direct-access storage devices that organizes storage into fixed-length blocks, serving as a simpler alternative to the variable-length Count Key Data (CKD) format that preceded it. Introduced by IBM in 1979 with the 3310 and 3370 drives, FBA was designed to streamline data access by eliminating the need for key fields and variable record handling, thereby reducing complexity in certain mainframe environments. Contemporaneous high-end devices such as the 3375 and 3380, by contrast, retained the CKD format, with compatibility between the two approaches provided through software emulation where required.

In FBA, each volume is divided into a fixed number of uniform blocks, typically 512 bytes (with some configurations supporting up to 4096 bytes), with no dedicated key field per block, simplifying the format and minimizing overhead. Data addressing relies on sequential relative block numbers (RBNs) starting from block 0, rather than record-specific identifiers, which enables efficient linear access and aligns well with block-oriented protocols. This block-based numbering reduces seek and transfer overhead, particularly for drives resembling commodity fixed-sector architectures, and supports features like an indexed Volume Table of Contents (VTOC) formatted similarly to VSAM relative record datasets for space management.

FBA gained native support in operating systems such as VM/370, DOS/VSE, and their successors z/VM and z/VSE, where it facilitated direct use for minidisks and system volumes without extensive reformatting. In contrast, MVS and z/OS environments do not support FBA natively and require software emulation to interface with such devices, limiting its adoption in larger-scale systems. The architecture's advantages include easier integration with open-systems standards due to its block-oriented design, making it suitable for smaller mainframes and hybrid environments where compatibility with non-IBM peripherals is beneficial.
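Because all blocks are the same size, mapping between a relative block number and a byte offset is pure arithmetic, as this minimal sketch shows (the 512-byte block size is the typical figure cited above):

```python
BLOCK_SIZE = 512  # bytes; typical FBA block size cited in the text

def rbn_to_offset(rbn: int) -> int:
    """Byte offset of the start of a block, given its relative block number."""
    return rbn * BLOCK_SIZE

def offset_to_rbn(offset: int) -> int:
    """Relative block number containing a given byte offset."""
    return offset // BLOCK_SIZE

# A record spanning bytes 10_000..10_999 touches these blocks:
first, last = offset_to_rbn(10_000), offset_to_rbn(10_999)
print(f"blocks {first}..{last}")   # blocks 19..21
```

This uniformity is what lets FBA dispense with per-record count and key fields entirely: the device needs no embedded metadata to locate a block.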

Fibre Channel Protocol (FCP) and Other Protocols

The Fibre Channel Protocol (FCP) enables the attachment of Small Computer System Interface (SCSI) storage devices to IBM z Systems mainframes using Fibre Connection (FICON) channels, allowing these devices to function as direct-access storage devices (DASD). FCP, which transports SCSI commands over Fibre Channel networks, was introduced in the late 1990s alongside FICON channels in 1998, providing a high-speed serial interface that superseded earlier parallel channel technologies. This protocol supports access to industry-standard disks, including those in RAID storage arrays, through single- or multiple-channel switches, facilitating integration of commodity hardware into mainframe environments.

In IBM z Systems, FCP receives full support in operating systems such as z/VM, Linux on Z, and z/VSE, where SCSI disks can be directly accessed or emulated as traditional DASD volumes for guest and system use. For example, z/VM employs an FBA emulation layer to present FCP-attached SCSI logical units (LUNs) as Fixed Block Architecture (FBA) devices, compatible with a range of applications. In contrast, z/OS provides only partial support for FCP, primarily through emulation to mimic Count Key Data (CKD) volumes, as native SCSI handling is limited and not optimized for its traditional DASD workflows. This emulation allows z/OS to use FCP devices but imposes constraints, such as reduced performance for certain access patterns and restrictions on device sharing without N_Port ID Virtualization (NPIV).

Technically, emulated DASD volumes on z Systems are addressed using an 8-byte MBBCCHHR format, which specifies the extent (M), bin (BB), cylinder (CC), head (HH), and record (R) for emulated tracks, enabling precise mapping of SCSI LUNs to mainframe device numbers (e.g., 0.0.xxxx subchannels). This addressing scheme supports the use of commodity drives as DASD by assigning them unique worldwide port names (WWPNs) and LUNs via zoning and LUN masking in the storage area network (SAN). FCP's integration with RAID-configured storage, such as the IBM DS8000 series, further enhances reliability by presenting redundant array volumes over FICON/FCP links, with ports configurable for either FCP or FICON protocols.

Preceding FCP, the Enterprise Systems Connection (ESCON) protocol, introduced in 1990, served as a fiber-optic interface for connecting mainframe channels to DASD and tape devices, offering up to 17 MB/s bandwidth over distances of up to 9 km with switches. ESCON acted as a bridge to modern Fibre Channel-based protocols like FICON and FCP but was phased out in favor of higher-speed alternatives. For non-mainframe DASD environments, contemporary protocols such as Serial ATA (SATA) and Serial Attached SCSI (SAS)—as well as NVMe over PCIe—provide direct attachments for hard disk drives and solid-state drives, emphasizing plug-and-play connectivity in distributed systems.

Despite its advantages, FCP in z/OS environments is often limited to cost-effective storage pools where emulation overhead is acceptable, as the operating system prioritizes native ECKD (Extended CKD) for performance-critical workloads. This preference stems from z/OS's legacy optimization for mainframe-specific DASD geometries, making FCP more suitable for auxiliary or Linux/z/VM-based tiers rather than primary production volumes.
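As an illustration of the 8-byte address just described, the following sketch packs and unpacks an MBBCCHHR value using a conventional one-byte M, two-byte BB, two-byte CC, two-byte HH, one-byte R split; it is a didactic model of the field layout, not an IBM channel-program structure:

```python
import struct

# 8-byte MBBCCHHR: M (1 byte), BB (2), CC (2), HH (2), R (1) = 8 bytes.
# Big-endian packing; a didactic model of the address fields only.

def pack_mbbcchhr(m: int, bb: int, cc: int, hh: int, r: int) -> bytes:
    return struct.pack(">BHHHB", m, bb, cc, hh, r)

def unpack_mbbcchhr(addr: bytes) -> tuple[int, int, int, int, int]:
    return struct.unpack(">BHHHB", addr)

addr = pack_mbbcchhr(m=0, bb=0, cc=1234, hh=7, r=3)
print(addr.hex())              # '00000004d2000703'
print(unpack_mbbcchhr(addr))   # (0, 0, 1234, 7, 3)
```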

Access Methods

In DOS/360 and Successors

DOS/360, introduced by IBM in 1966, was designed for smaller System/360 configurations with limited memory and processing capabilities, serving as a disk-based operating system for batch-oriented environments on direct-access storage devices (DASD). It supported single-tasking operations, with multitasking introduced in later variants like DOS/VS in the 1970s, emphasizing efficient I/O for applications running in constrained storage environments of up to 256 KB. Successors such as DOS/VS and DOS/VSE extended these capabilities to System/370 hardware, incorporating virtual storage management while retaining core DASD handling for smaller-scale systems compared to OS/360 equivalents.

Access to DASD in DOS/360 relied on basic macros for sequential and direct operations, with BSAM providing unbuffered sequential read/write access to records on DASD volumes, suitable for simple file processing without advanced queuing. QSAM enhanced this by introducing buffering and label processing, allowing queued I/O requests to improve throughput for sequential datasets on DASD, though it required careful buffer allocation to avoid storage overflows in low-memory setups. For low-level direct access, the EXCP macro enabled programmers to issue custom channel programs directly to DASD controllers, bypassing higher-level access methods for optimized control over seek and transfer operations, particularly useful in performance-critical batch jobs.

DOS/360 organized DASD data into partitioned datasets using the Basic Partitioned Access Method (BPAM), where multiple sequential files or members (e.g., load modules or procedures) shared a single DASD extent, managed via a directory for member lookup. Addressing within these datasets employed the 3-byte relative track (TTR) scheme, specifying a relative track number (TT) and record number (R) from the dataset's starting extent, facilitating direct positioning without full cylinder-head-record calculations. Primarily, DOS/360 and its early successors supported the Count Key Data (CKD) format for DASD, where records included count fields for length and optional keys for indexing, enabling variable-length blocks on tracks. Later versions, such as DOS/VSE with Advanced Functions in the late 1970s, introduced Fixed Block Architecture (FBA) support for compatible fixed-sector devices, using fixed 512-byte blocks addressed by relative block numbers to simplify I/O in virtual storage contexts while maintaining compatibility with CKD volumes. Limited multitasking in these systems restricted concurrent DASD access to a single task or supervisor-managed queues, prioritizing reliability over complex multiprogramming.
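The 3-byte TTR address can be modeled directly; the helper functions below are illustrative stand-ins, not IBM system macros:

```python
# 3-byte TTR: two bytes of relative track number (TT), one byte of record (R).
# Illustrative helpers, not IBM system macros.

def pack_ttr(track: int, record: int) -> bytes:
    if not (0 <= track <= 0xFFFF and 0 <= record <= 0xFF):
        raise ValueError("TT must fit in 2 bytes and R in 1 byte")
    return track.to_bytes(2, "big") + record.to_bytes(1, "big")

def unpack_ttr(ttr: bytes) -> tuple[int, int]:
    return int.from_bytes(ttr[:2], "big"), ttr[2]

ttr = pack_ttr(track=300, record=5)
print(ttr.hex())          # '012c05'
print(unpack_ttr(ttr))    # (300, 5)
```

Because TT is relative to the start of the dataset's extent, the same TTR remains valid if the dataset is moved to a different location on the volume, which is the portability benefit noted above.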

In OS/360 and Successors

OS/360, introduced by IBM in 1966, represented a major advancement in operating systems for large-scale mainframes, providing multiprogramming capabilities and robust support for direct-access storage devices (DASD). This system and its successors, including MVS (Multiple Virtual Storage) from 1974 and z/OS from 2000, emphasized efficient DASD access through specialized methods tailored for high-volume, shared environments. Access methods in this lineage built on channel-based I/O, enabling concurrent processing and device independence while natively supporting Count Key Data (CKD) formats on devices like the IBM 2311 and 2314 disk packs.

Key access methods included Basic Sequential Access Method (BSAM) and Queued Sequential Access Method (QSAM) for sequential and direct processing of DASD data sets. BSAM offered low-level control using READ and WRITE macros for block transfers, requiring programmer-managed buffering and supporting fixed (F), variable (V), and undefined (U) record formats on direct-access volumes. QSAM extended BSAM with automated buffering (simple or exchange modes) via GET and PUT macros, optimizing overlap of I/O and computation in multiprogrammed settings while maintaining compatibility for update-in-place operations. For indexed access, the Indexed Sequential Access Method (ISAM) organized records by keys (1-255 bytes) across prime and overflow areas, using track and cylinder indexes for rapid retrieval; it supported direct key-based GET and PUT operations, though reorganization was needed to manage overflow chains.

DASD interactions relied on channel programs built from chained Channel Command Words (CCWs), initiated through the EXCP macro for direct device control or integrated into higher-level methods. Addressing used a 3-byte relative track address (TTR) scheme, specifying a relative track number and record position relative to the data set start, enabling precise block location on CKD volumes; for example, NOTE and POINT macros in BSAM facilitated TTR-based repositioning. While CKD was native, Fixed Block Architecture (FBA) support emerged in successors like OS/VS1 and MVS only through emulation layers, allowing compatibility with fixed-block devices via VSAM or utilities without altering core addressing.

In the evolution to OS/VS1, OS/VS2, MVS, and z/OS, the Virtual Storage Access Method (VSAM), introduced in 1973, superseded ISAM by providing advanced indexing and clustering for DASD. VSAM organized data into clusters—combining index and data components managed by an integrated catalog—supporting key-sequenced (KSDS), entry-sequenced (ESDS), and relative record (RRDS) datasets with balanced-tree indexes for efficient random and sequential access, akin to B-tree structures. It used control intervals (512 bytes to 32 KB) and control areas for space management, including CI/CA splits to handle insertions dynamically, reducing reorganization needs compared to ISAM. Error handling incorporated error-correcting codes (ECC) on DASD tracks, detecting and correcting single-burst errors in count, key, and data fields via check bytes; in MVS environments, correctable errors were handled transparently by storage directors like the IBM 3880, while uncorrectable ones triggered retries and recording in SYS1.LOGREC for later analysis. This framework ensured data integrity across the OS/360 lineage, scaling to modern features like extended addressability.
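To illustrate keyed access in the VSAM style, the toy sketch below keeps records in key sequence and locates them by binary search over a separate key index, loosely analogous to a KSDS index component locating records in the data component; real VSAM maintains multi-level indexes over control intervals on DASD:

```python
import bisect

# Toy key-sequenced data set: records kept in key order, located by binary
# search over a separate index of keys. A teaching analogy for KSDS access,
# not an implementation of VSAM itself.

keys = ["A0012", "B0047", "C0150", "C0911", "F0003"]    # index component
records = [f"payload for {k}" for k in keys]             # data component

def ksds_get(key: str) -> str | None:
    """Direct retrieval by key: O(log n) instead of a sequential scan."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return records[i]
    return None

def ksds_put(key: str, payload: str) -> None:
    """Insert preserving key order (handled by CI/CA splits in real VSAM)."""
    i = bisect.bisect_left(keys, key)
    keys.insert(i, key)
    records.insert(i, payload)

print(ksds_get("C0150"))   # payload for C0150
ksds_put("C0500", "new record")
print(keys)                # key order preserved after insertion
```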

In Contemporary z Systems

In contemporary IBM z Systems, direct-access storage devices (DASD) are integral to the z/OS and Linux on IBM Z operating environments of the 2020s, providing high-performance storage for mission-critical workloads through virtualization and support for Fibre Channel Protocol (FCP)-attached solid-state drives (SSDs) and hard disk drives (HDDs). These systems leverage the IBM DS8000 series, which emulates traditional CKD DASD volumes while enabling FCP connectivity for open-systems storage integration, allowing mainframes to access SSD-based arrays for enhanced I/O throughput in hybrid configurations.

Access methods in these environments emphasize scalability and automated management, with extended-format VSAM providing scalable data set handling and VSAM Record Level Sharing (RLS) enabling concurrent read/write access across multiple systems in a Parallel Sysplex. VSAM RLS, introduced to support multisystem data sharing, integrates with the Coupling Facility for lock management, allowing applications like CICS and batch jobs to share VSAM data sets without serialization overhead. Complementing this, System-Managed Storage (SMS) automates DASD allocation by defining storage groups, classes, and management policies, dynamically selecting volumes based on data attributes and performance needs to optimize space utilization and I/O efficiency.

Addressing in modern z Systems uses an 8-byte extended format (MBBCCHHR), where M denotes the extent, BB the bin, CC the cylinder, and HHR the head and record, supporting volumes exceeding 65,520 cylinders via extended address volumes (EAV) with capacities up to 1,182,006 cylinders (about 1 TB) per volume. This scheme facilitates precise location in virtualized environments, particularly for database integration; for instance, Db2 for z/OS stores table spaces and indexes on DASD volumes managed through DFSMS, leveraging CKD or emulated FBA formats for efficient query processing and recovery. Performance tuning often incorporates zHyperWrite, a write-acceleration technology that enables parallel synchronous writes to primary and secondary volumes in Peer-to-Peer Remote Copy (PPRC) setups, significantly reducing log write latency for Db2 transactions in mirrored configurations.

Emerging capabilities extend DASD functionality to hybrid cloud architectures, where z/OS supports bursting workloads to remote storage pools via DFSMS integration with cloud object storage, enabling automatic tiering of less frequently accessed data sets while maintaining sysplex-wide visibility. Security is bolstered by FIPS 140-2 (and later 140-3) compliant full-disk encryption on DS8000 arrays, providing data-at-rest protection through hardware-based Advanced Encryption Standard (AES) keys managed at the drive level, ensuring compliance for regulated industries without impacting I/O performance.
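The volume capacities above follow from standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track), as this quick arithmetic check shows:

```python
# Capacity of a 3390-geometry volume: cylinders x 15 tracks x 56,664 bytes.
TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 56_664        # 3390 track capacity

def volume_bytes(cylinders: int) -> int:
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK

classic = volume_bytes(65_520)      # largest non-EAV volume
eav_max = volume_bytes(1_182_006)   # architectural EAV maximum
print(f"65,520 cyl    ~= {classic / 1e9:.1f} GB")   # ~55.7 GB
print(f"1,182,006 cyl ~= {eav_max / 1e12:.2f} TB")  # ~1.00 TB
```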

Terminology and Addressing

Key Concepts and Terms

In direct-access storage devices (DASD), a logical record represents the basic unit of user data as perceived by applications and operating systems, consisting of related fields processed as a single entity, such as a line of text or a database entry. In contrast, a physical record (or block) is the smallest unit of data transfer between the device and the host system, grouping one or more logical records along with any necessary headers or gaps for storage efficiency on the physical medium. A track refers to the circular path along the surface of a rotating disk platter where data is magnetically recorded, spanning a full 360° revolution and capable of holding multiple records or blocks. A cylinder is formed by the vertical alignment of tracks at the same radial position across all platters in a multi-platter drive, allowing access by multiple read/write heads without repositioning the actuator, thereby minimizing seek times during operations.

Key physical components of DASD include the spindle, the motorized shaft that rotates the stack of disk platters at a constant speed, typically measured in revolutions per minute (RPM), to enable data access. The head assembly consists of the read/write heads mounted on actuator arms that position them over specific tracks to perform data transfer, often operating in close proximity to the platter surfaces within a sealed environment to prevent contamination. In cases where a record exceeds the capacity of a single track, it may use track overflow, extending the data across subsequent tracks while maintaining logical continuity through indexing or chaining mechanisms.

In IBM mainframe environments, a data set serves as the equivalent of a file in other systems, defined as a named collection of related logical records stored and retrieved via an assigned identifier, supporting various organizations like sequential or indexed. A volume, on the other hand, denotes the physical storage unit, such as a disk pack or head-disk assembly (HDA), which can contain multiple data sets and is mounted as a single addressable entity for system access.

Performance in DASD is characterized by access latency, the time delay before data transfer begins, comprising seek time (positioning the head over the target cylinder and track) plus rotational delay (waiting for the desired sector to rotate under the head, averaging half a revolution). Throughput for random operations is commonly measured in input/output operations per second (IOPS), quantifying the device's capacity to handle non-sequential reads and writes, which is critical for workloads involving scattered accesses.
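These latency components bound the random IOPS a single-actuator drive can deliver; the rough estimate below uses typical HDD figures as illustrative assumptions:

```python
# Rough upper bound on random IOPS for a single-actuator drive:
# one I/O completes per (seek + rotational delay + transfer) interval.

def random_iops(seek_ms: float, rpm: int, transfer_ms: float = 0.1) -> float:
    rotational_delay_ms = (60_000 / rpm) / 2     # average: half a revolution
    service_time_ms = seek_ms + rotational_delay_ms + transfer_ms
    return 1000 / service_time_ms

print(f"7200 RPM, 8 ms seek: ~{random_iops(8.0, 7200):.0f} IOPS")    # ~82
print(f"15k RPM, 4 ms seek : ~{random_iops(4.0, 15_000):.0f} IOPS")  # ~164
```

The mechanical terms dominate the service time, which is why SSDs, with no seek or rotational delay, deliver orders of magnitude more random IOPS from the same interface.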

Addressing Schemes

In IBM's early direct-access storage device (DASD) systems, addressing schemes evolved to accommodate varying levels of device complexity and capacity. For DOS/360, locations on DASD volumes were primarily specified using a 3-byte relative address known as TTR (track-track-record), where the first two bytes represented the relative track number and the third byte indicated the record position within that track. This scheme facilitated efficient access in smaller-scale environments by abstracting the physical geometry into relative positions from the start of a data set. With the introduction of OS/360, addressing shifted to a 4-byte absolute track format called CCHH (cylinder-cylinder-head-head), which directly encoded the cylinder number (two bytes) and head number (two bytes), enabling precise specification of physical locations on multi-cylinder, multi-head devices. This absolute format supported larger volumes and was integral to channel programs that interacted with DASD hardware.

As DASD capacities grew, particularly with extended address volumes (EAV) in z/OS, the addressing scheme expanded to an 8-byte absolute format, MBBCCHHR (extent byte, bin bytes, cylinder, head, record), to handle volumes exceeding 65,520 cylinders by incorporating the extent number, device-specific bin bytes, and extended cylinder addressing. This format is also employed in modern environments using the Fibre Channel Protocol (FCP) for SCSI-based DASD, allowing compatibility with both traditional CKD (Count Key Data) and FBA (Fixed Block Architecture) devices while supporting up to 2.1 billion tracks per volume.

IBM DASD addressing distinguishes between absolute and relative modes to balance precision and portability. Absolute addressing, using formats like CCHH or MBBCCHHR, specifies exact physical locations on a volume, which is essential for low-level I/O operations but ties programs to specific hardware geometries. Relative addressing, such as TTR, expresses positions as offsets from the beginning of a data set or extent, promoting device independence by converting to absolute addresses at run time via system conversion routines. The Volume Table of Contents (VTOC), a specialized data set on each DASD volume, manages space allocation by maintaining Data Set Control Blocks (DSCBs) that record extent locations in absolute or relative terms, enabling the Direct Access Device Space Management (DADSM) routines to search, allocate, and deallocate space without duplicating physical tracks.

Outside IBM's ecosystem, non-IBM DASD and general disk drives typically employ logical block addressing (LBA), a linear scheme that identifies data blocks by sequential integers starting from 0, with each block traditionally sized at 512 bytes, abstracting underlying physical structures like sectors and tracks for broader compatibility across protocols such as SATA and SCSI.
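The relationship between geometric and linear addressing is captured by the classic CHS-to-LBA formula; in the sketch below, the geometry constants are arbitrary examples, and sectors are numbered from 1 by convention:

```python
# Classic CHS <-> LBA conversion for a drive with fixed geometry.
# Geometry values are arbitrary examples for illustration.

HEADS_PER_CYL = 15        # heads (tracks) per cylinder
SECTORS_PER_TRACK = 63    # sectors per track

def chs_to_lba(c: int, h: int, s: int) -> int:
    return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRACK + (s - 1)

def lba_to_chs(lba: int) -> tuple[int, int, int]:
    c, rem = divmod(lba, HEADS_PER_CYL * SECTORS_PER_TRACK)
    h, s0 = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s0 + 1

lba = chs_to_lba(c=2, h=4, s=9)
print(lba)               # (2*15 + 4)*63 + 8 = 2150
print(lba_to_chs(lba))   # (2, 4, 9)
```

LBA thus plays the same role for commodity drives that relative addressing plays on IBM DASD: it hides device geometry behind a single linear index.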
