
Network model

The network model is a type of database architecture that represents data as collections of records connected through predefined relationships, enabling each record to have multiple parent and child associations to model complex, many-to-many linkages in a graph-like structure. The model, formalized in the late 1960s by the Conference on Data Systems Languages (CODASYL), uses a schema composed of record types and set types to define the database structure, where sets act as pointers linking owner records (parents) to member records (children). Developed as an extension of the hierarchical model to address its limitations in handling non-tree structures, the network model gained prominence in the 1970s through the CODASYL Data Base Task Group (DBTG) standard, which influenced commercial systems such as the Integrated Data Store (IDS) and IDMS. It supports navigational access via procedural queries that traverse links between records, making it suitable for applications requiring efficient handling of interconnected data, such as bills of materials or inventories, though it demands detailed knowledge of the schema for effective querying. Despite its flexibility in representing real-world relationships, the model's complexity in schema design and maintenance contributed to its decline with the rise of the relational model in the 1980s, which offered declarative querying and greater simplicity.

Core Concepts

Definition and Structure

The network model is a database architecture that organizes data in a graph-like structure, where records function as nodes and sets serve as directed edges to represent relationships between them. This approach facilitates the modeling of complex interconnections, including many-to-many relationships, by allowing records to participate in multiple linkages without the rigid parent-child hierarchy of tree structures. The fundamental structural principle of the network model is the owner-member relationship, in which one record type is designated as the owner (or parent) of a set occurrence and one or more other record types act as members (or children). Each set occurrence links a single owner to zero or more members, enabling flexible navigation across the data graph while maintaining directed associations that support efficient querying of interrelated records. Unlike simpler models limited to single parentage, this principle allows members to connect to multiple owners through different sets, thereby accommodating real-world scenarios with multifaceted dependencies.

Key components of the network model include record types, which are structural definitions analogous to entities and comprise one or more data items (fields) that store specific attribute values, such as names or identifiers. Set types, in turn, specify the relationships between record types, implemented through pointer-based links that physically connect occurrences of owner and member records in storage. These pointers enable direct traversal from owners to members and, in some cases, vice versa, forming the backbone of data access.

In graphical representations, the network model's structure is commonly illustrated with data-structure diagrams, which depict record types as rectangular boxes and set relationships as arrows or lines indicating the direction from owner to member. These diagrams highlight the interconnected nature of the database, showing how pointers create navigation paths akin to traversing a graph, and thus provide a visual aid for understanding the overall schema.
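The multiple-ownership principle can be made concrete with a small sketch. The two set-type declarations below, written in the CODASYL-style DDL notation described under Technical Implementation and using hypothetical DEPARTMENT, PROJECT, and EMPLOYEE record types, make the same member record type reachable from two different owners; exact clause forms vary between implementations.
SET NAME IS DEPT-STAFF
  OWNER IS DEPARTMENT
  MEMBER IS EMPLOYEE.
SET NAME IS PROJECT-STAFF
  OWNER IS PROJECT
  MEMBER IS EMPLOYEE.
Because an EMPLOYEE occurrence may belong to one DEPT-STAFF occurrence and one PROJECT-STAFF occurrence at the same time, a single employee record is effectively linked to two owners without being duplicated.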

Records, Sets, and Relationships

In the network model, data is organized into records, which serve as the fundamental units of storage and retrieval. A record type defines the structure of a group of similar records, consisting of named items or fields that hold values, such as strings or numbers. Records are distinguished as logical or physical: logical records represent the conceptual view accessible to applications, while physical records handle the underlying storage, often implemented as blocks or pages in files. Within relationships, records are further classified as owner records, which act as parents in a set, or member records, which act as children linked to one or more owners.

Sets form the core mechanism for expressing relationships between record types in the network model, defined as an ordered collection that links one owner record occurrence to zero or more member record occurrences of another type, establishing a many-to-one relationship from members to their owner. Set types specify this linkage, with ordering modes such as first-in, last-in, or sorted by a key field to determine the sequence of members. Single-parent sets restrict a member to exactly one owner occurrence, while multi-parent structures are achieved indirectly by allowing a member record type to participate in multiple set types, each with a different owner, thus enabling complex many-to-many relationships through intermediary records where needed.

Navigation through these relationships relies on pointers embedded within records, forming circular linked lists or rings that connect an owner to its members for efficient traversal. Currency indicators maintain position during operations, tracking the current record of a specific record type, set type, or the entire run unit (a transaction-like scope), allowing commands to find and retrieve related data by moving forward, backward, or to the first or last position in a set. This pointer-based approach facilitates direct access without full scans, with each set occurrence represented as a self-contained ring.

Several constraints ensure integrity in sets. Uniqueness rules prohibit a member record from belonging to more than one occurrence of the same set type, preventing duplicates within a single occurrence while allowing participation in multiple set types. Cardinality is enforced as one-to-many per set occurrence, with exactly one owner per set and a variable number of members (zero or more), though system limits may cap the maximum number of members to manage storage. Additional options such as mandatory membership require every member to connect to an owner, while optional membership allows standalone records.
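The intermediary-record technique for many-to-many relationships can also be sketched in DDL terms. In the following hypothetical example, SUPPLIER and PART are related through a link record type SUPPLY that is a member in two set types, one owned by each side; record and set names are illustrative, and the clause forms follow the examples given under Technical Implementation.
SET NAME IS SUPPLIES
  OWNER IS SUPPLIER
  MEMBER IS SUPPLY
  MANDATORY AUTOMATIC.
SET NAME IS SUPPLIED-BY
  OWNER IS PART
  MEMBER IS SUPPLY
  MANDATORY AUTOMATIC.
Each SUPPLY occurrence belongs to exactly one SUPPLIES ring and one SUPPLIED-BY ring, so a program can reach all parts for a supplier by walking its SUPPLIES ring and, from each SUPPLY member, following the owner pointer of SUPPLIED-BY to the corresponding PART, and symmetrically in the other direction.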

Historical Development

Origins and Early Influences

The network database model emerged in the early 1960s, drawing foundational influences from graph theory, which provided a conceptual framework for representing complex interconnections between data entities as nodes and edges. This mathematical approach, developed in the 18th and 19th centuries but increasingly applied in computing by the mid-20th century, enabled more flexible modeling of relationships than the linear or tree-like structures prevalent in earlier file systems. Charles Bachman, while working at General Electric (GE), leveraged these graph-theoretic principles to create the Integrated Data Store (IDS) in 1963, marking the first direct-access database management system and laying the groundwork for the network model. IDS represented data as records linked through sets, allowing navigation across multifaceted associations in a graph-like manner.

Early motivations for the network model stemmed from the growing demands of business computing in the 1960s, where organizations required integrated systems to manage intricate, many-to-many relationships in operational data—such as those in supply chains and production processes—that rigid file management techniques could not efficiently handle. At the time, data storage relied heavily on sequential tape systems or early navigational file structures, which lacked the ability to support shared data access across departments without redundancy. IBM's Information Management System (IMS), introduced in 1966 as a hierarchical system for the Apollo space program, further highlighted the limitations of tree-structured hierarchies, which struggled with the non-parent-child linkages common in real-world business scenarios.

Bachman, as the key figure, pioneered the model's development through IDS and its companion Integrated File System, aiming to enable company-wide data sharing at GE's manufacturing divisions. His innovations, including data structure diagrams for visualizing relationships, earned him the 1973 ACM Turing Award for contributions to database technology.

Initial adoption of the network model occurred in mid-1960s mainframe environments, particularly for manufacturing and inventory management applications where complex interdependencies between parts, suppliers, and production lines necessitated graph-based navigation. GE implemented IDS across its facilities to streamline appliance production data, demonstrating the model's practicality for large-scale, integrated operations on systems such as the GE-600 series computers. This early use case influenced subsequent implementations at other industrial firms, establishing the network approach as a viable alternative for handling enterprise-scale data before broader standardization efforts.

CODASYL Standardization

The Conference on Data Systems Languages (CODASYL), established in 1959 by the U.S. Department of Defense to standardize programming languages such as COBOL, turned its attention to database management in the 1960s. Building on early influences like Charles Bachman's Integrated Data Store (IDS), CODASYL formed the List Processing Task Force in 1965, renamed the Data Base Task Group (DBTG) in May 1967, to develop specifications for a common database management system compatible with COBOL and other languages. The DBTG stabilized its membership in January 1969 under chairman A. Metaxides and published its first proposals that October, laying the groundwork for the network database model.

The pivotal 1971 DBTG Report, released in April and reviewed by the CODASYL Programming Language Committee in May, formalized the network model by defining key components including the schema for the overall database structure, the subschema for user views, and data storage mechanisms using records and sets to represent complex relationships. The report incorporated 130 of 179 submitted proposals and emphasized a multi-level architecture separating conceptual, external, and internal data representations. The June 1973 Journal of Development, produced by the Data Description Language Committee (DDLC) formed in 1971, refined these elements, notably introducing privacy locks for controlling access to records and items, as well as module concepts for organizing subschemas to enhance security and modularity.

Central to these standards were the Data Description Language (DDL) specifications for describing database structures independently of host languages and the Data Manipulation Language (DML) for operations such as storing, retrieving, and updating data, both designed to ensure portability across diverse systems and vendors. These vendor-neutral features promoted interoperability, reducing proprietary lock-in and facilitating program migration.

The standards significantly influenced the database industry, driving widespread adoption of network model systems in government agencies and large enterprises during the 1970s, as evidenced by commercial implementations like Cullinet's IDMS and other COBOL-integrated solutions that supported complex, many-to-many data relationships in mainframe environments. Adoption peaked in the mid-1970s, with the standards enabling scalable data management for critical applications in sectors requiring robust, pointer-based navigation.

Technical Implementation

Data Definition Language

The Data Description Language (DDL) in the network model provides a formal syntax for defining the logical and physical structure of the database, primarily through schema, subschema, and storage specifications. The schema establishes the overall database blueprint, including the record types, data items, and set types that model relationships between records.

Schema definition begins with the RECORD entry, which declares a record type and its constituent items. The basic syntax is RECORD NAME IS record-name, followed by subentries for items using level numbers (01, 02, and so on for nested subgroups) and clauses such as PICTURE for formatting or TYPE for data categories such as arithmetic, character string, or database key. For example:
RECORD NAME IS EMPLOYEE
  01 EMP-ID PICTURE IS "9(6)"
  01 EMP-NAME PICTURE IS "X(30)"
  01 SALARY TYPE IS DECIMAL(7,2).
Data items represent the smallest named units, such as numeric fields or strings, and can include aggregates like vectors via the OCCURS clause for repeating groups. Set types are defined with the SET entry to specify owner-member relationships, using syntax like SET NAME IS set-name OWNER IS owner-record MEMBER IS member-record, optionally with an ORDER clause (e.g., ASCENDING on a key field) or membership rules (e.g., MANDATORY AUTOMATIC for DBMS-managed links). An example is:
SET NAME IS EMPLOYEE-DEPT
  OWNER IS DEPARTMENT
  MEMBER IS EMPLOYEE
  MANDATORY AUTOMATIC
  ORDER IS ASCENDING DEPT-NO.
This structure enforces the network's graph-like connections between records.

The subschema DDL creates user-specific views by extracting a subset of the schema, allowing programs to access only the relevant records, sets, and data items while hiding others for security and simplicity. Defined with syntax like SUBSCHEMA NAME IS subschema-name WITHIN SCHEMA schema-name PRIVACY KEY IS 'password', it supports modifications such as redefining vectors as fixed arrays or applying privacy locks to restrict access. This enables controlled views without altering the underlying schema.

The storage schema details the physical organization, starting with AREA entries like AREA NAME IS area-name [TEMPORARY], which divide the database into logical storage regions; records are assigned to areas via a WITHIN clause in the RECORD definition (e.g., WITHIN MAIN-AREA). Pages serve as fixed-length physical units within areas, managed automatically by the DBMS for record placement. Indexing is handled through clauses like INDEXED in SET entries or SEARCH KEY in RECORD entries, enabling efficient retrieval based on specified keys.

Key DDL elements include locators, implemented as database keys (DBKEYs), which act as unique pointers to record occurrences for direct access; these are declared with TYPE IS DATA-BASE-KEY and managed by the DBMS. Calculated placement, via CALC keys, supports computed storage or access locations, using LOCATION MODE IS CALC USING key-fields in the RECORD entry to derive record positions dynamically (e.g., by hashing on a name field). Rename clauses appear in subschemas to alias data items or records, such as redefining a field name for application-specific use without impacting the underlying schema.
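The storage-related clauses can be combined into a single schematic fragment. The sketch below uses hypothetical names (PERSONNEL-AREA, PAYROLL-VIEW, PERSONNEL-SCHEMA) and follows the clause forms described above: an area declaration, a record placed in that area by hashing its key, and a subschema protected by a privacy key; the exact ordering and spelling of clauses differ among CODASYL implementations.
AREA NAME IS PERSONNEL-AREA.
RECORD NAME IS EMPLOYEE
  LOCATION MODE IS CALC USING EMP-ID
  WITHIN PERSONNEL-AREA
  01 EMP-ID PICTURE IS "9(6)"
  01 EMP-NAME PICTURE IS "X(30)".
SUBSCHEMA NAME IS PAYROLL-VIEW WITHIN SCHEMA PERSONNEL-SCHEMA
  PRIVACY KEY IS 'PAYROLL-PASS'.
Under this sketch, the DBMS would place each EMPLOYEE occurrence on a page of PERSONNEL-AREA computed from EMP-ID, while programs compiled against PAYROLL-VIEW see only the records and items that the subschema exposes.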

Data Manipulation Language

The Data Manipulation Language (DML) in the network model, as defined by the Data Base Task Group (DBTG), is a procedural language embedded within a host programming language such as COBOL or FORTRAN, enabling applications to navigate and manipulate records through explicit commands that manage currency pointers and status indicators. Unlike declarative query languages, it requires programmers to specify step-by-step operations, including loops and conditional checks, to traverse the complex graph of records and sets, reflecting the model's emphasis on direct pointer-based access for efficiency in hierarchical or many-to-many relationships.

Navigation in the network model relies on commands that position currency indicators—pointers to the current record of the run unit (CRU), of a record type, or of a set occurrence—to facilitate traversal. The FIND command locates a specific record or set element, setting the appropriate currency; for example, FIND ANY CUSTOMER USING CUSTOMER-NAME retrieves the first matching record, FIND OWNER WITHIN DEPOSITOR positions to the owner record of the depositor set, and FIND NEXT ACCOUNT WITHIN DEPOSITOR advances to the next member record linked to the current owner. Once positioned, the GET command transfers the current record's data into the program's user work area (UWA) for processing, as in GET CUSTOMER after a FIND operation. The READY command prepares database areas or realms for access, specifying modes such as retrieval or update to enable subsequent operations and ensure controlled concurrency.

Insertion operations use the STORE command to add new records to the database, with the UWA populated before execution; for instance, STORE ACCOUNT creates a new account record, automatically connecting it to an owner set if schema rules specify automatic membership. Set membership can also be updated explicitly via CONNECT, which links the new record as a member to a specified owner, such as CONNECT ACCOUNT TO DEPOSITOR after storing an account under a customer. Deletion employs the ERASE command to remove the current record, with options like ERASE ALL CUSTOMER recursively deleting the owner and all connected members; prior to erasure, DISCONNECT severs set links, e.g., DISCONNECT ACCOUNT FROM DEPOSITOR, to maintain set integrity without cascading deletes unless specified.

Updates are handled by the MODIFY command, which alters data items in the current record after a positioning FIND and GET; for example, after a FIND FOR UPDATE on CUSTOMER followed by GET CUSTOMER, MODIFY CUSTOMER can change an address field, with the system updating currency pointers to reflect the modified instance. The explicit "for update" clause in FIND locks the record, preventing concurrent modifications.

The procedural essence of the DML requires application code to orchestrate navigation, typically through iterative constructs such as loops that check status flags (e.g., DB-STATUS for success or end-of-set), contrasting sharply with declarative paradigms in which queries abstract away pointer management. This approach, while verbose, allows fine-grained control suited to the network model's linked structures.
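As a sketch of this procedural style, the hypothetical COBOL fragment below lists the accounts of one customer and then adds a new account, using the verbs described above and assuming the customer/account/depositor record and set names from the examples; status values, verb spellings, and loop syntax vary across CODASYL implementations.
MOVE "JOHNSON" TO CUSTOMER-NAME IN CUSTOMER.
FIND ANY CUSTOMER USING CUSTOMER-NAME.
GET CUSTOMER.
*> Walk the DEPOSITOR set occurrence owned by this customer.
FIND FIRST ACCOUNT WITHIN DEPOSITOR.
PERFORM UNTIL DB-STATUS NOT = 0
    GET ACCOUNT
    DISPLAY ACCOUNT-NUMBER " " BALANCE
    FIND NEXT ACCOUNT WITHIN DEPOSITOR
END-PERFORM.
*> Insert a new account and (assuming MANUAL membership) link it to the same customer.
MOVE "A-535" TO ACCOUNT-NUMBER IN ACCOUNT.
MOVE 500 TO BALANCE IN ACCOUNT.
STORE ACCOUNT.
CONNECT ACCOUNT TO DEPOSITOR.
The FIND verbs reposition currency, GET copies the current record into the work area, and the final STORE/CONNECT pair makes the new ACCOUNT a member of the DEPOSITOR occurrence owned by the customer located earlier.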

Comparisons and Applications

Differences from Other Models

The network database model differs fundamentally from the hierarchical model in its structural flexibility. While the hierarchical model organizes data into tree-like structures where each child record has exactly one parent, enforcing one-to-many relationships, the network model extends this to arbitrary graphs, permitting records to have multiple parents and supporting many-to-many relationships. For instance, in modeling organizational data, a hierarchical approach might assign an employee to a single department as the sole parent, whereas the network model allows that employee record to link to multiple department records simultaneously, reflecting real-world dual reporting lines without duplicating the employee data across trees. This generalization from trees to graphs enables the network model to represent more complex interconnections, though it requires explicit management of links that the hierarchical model avoids through its rigid parent-child hierarchy.

In contrast to the relational model, the network model relies on physical pointers and explicit links between records for navigation, rather than logical associations via shared keys and table joins. Relationships in the network model are defined through sets—collections of owner-member record pairs—necessitating procedural traversal commands to follow these links, which can lead to redundancy if the same data must be repeated to accommodate multiple access pathways. The relational model, however, employs normalization to eliminate such redundancy by storing data in independent tables connected declaratively through primary and foreign keys, avoiding the need for pointers.

Query paradigms further distinguish the two models: the network approach uses a procedural data manipulation language embedded in a host programming language, requiring developers to specify step-by-step navigation (e.g., finding an owner record and then iterating through its set members), without a built-in query optimizer to automate path selection. Relational databases, by comparison, support declarative queries in SQL, where users describe the desired results and the system's optimizer determines efficient join execution plans. This navigational rigidity in the network model contributes to lower data independence, as changes to physical pointer structures can invalidate application code, whereas the relational model's logical table abstraction insulates applications from storage details.

Modern and Legacy Usage

With the standardization of SQL and the dominance of relational database management systems in the 1980s, which simplified data modeling and querying compared to the navigational access of network models, the network database approach saw a sharp decline in adoption. The shift was driven by relational systems' ability to handle complex queries more intuitively, without requiring programmers to navigate explicit pointers and sets, leading to broader commercial success and easier maintenance.

Network databases nonetheless persist in legacy mainframe environments, especially COBOL-based systems supporting mission-critical operations in sectors like banking and insurance, where high-throughput transaction processing remains essential. For instance, CA IDMS, a prominent CODASYL-compliant implementation now maintained by Broadcom, continues, as of 2025, to manage structured data in financial applications requiring reliability and performance on IBM z Systems, with release notes updated in October 2025 confirming active support. Similar usage extends to other long-running administrative systems for handling interconnected records in billing and related processing, though often alongside modernization efforts to integrate with contemporary relational and cloud platforms.

Migrating from network models to relational databases presents significant challenges due to the intricate pointer-based relationships and set structures, which lack the logical data independence of relational schemas and often require extensive reverse engineering to map owner-member links accurately. Tools and methods, such as automated schema conversion and wrapper-based migration, address these issues by capturing source data semantics, resolving many-to-many complexities, and ensuring data integrity during conversion, but projects can still face high costs and risks from incomplete translation of navigational logic.

In modern contexts, the network model's emphasis on explicit connectivity has influenced graph databases, where concepts like record sets prefigure node-edge representations for complex relationship modeling, with modern graph systems echoing CODASYL's flexible navigation while adding declarative query facilities. Niche revivals appear in embedded systems, where the model's efficient, low-overhead pointer traversal suits resource-constrained environments, such as real-time embedded databases that combine network-like structures with relational features for optimized data access.
