
Information model

An information model is a formal representation of concepts, relationships, constraints, rules, and operations that specifies the semantics of data within a chosen domain of discourse, providing an abstract framework independent of specific technologies or implementations. Information models serve as foundational tools in computer science and information systems engineering, enabling the unambiguous description of information requirements to facilitate data sharing, interoperability, and efficient management across networked environments. They are typically developed using standardized modeling languages such as the Unified Modeling Language (UML) or entity-relationship diagrams, which help organize real-world entities, their attributes, and interdependencies into structured formats. Key purposes include defining data structures for storage and retrieval, supporting system integration in domains like manufacturing and utilities, and ensuring consistent behavior in distributed systems. For instance, models like the Common Information Model (CIM) provide standardized definitions for management information in IT and enterprise settings, promoting vendor-neutral data exchange. Information models are generally classified into three levels: conceptual, which offers a high-level view of information needs without implementation details; logical, which details data relationships and semantics in a technology-agnostic structure; and physical, which specifies implementation-specific aspects for particular databases or applications. This hierarchical approach allows for progressive refinement from abstract requirements to practical deployment, often incorporating meta-models and common data dictionaries to enhance reusability and precision. In standards bodies such as IEC and ISO, information modeling emphasizes hierarchical organization with elements such as data types and value ranges to support machine-readable specifications. Applications span diverse fields, including statistical data exchange via frameworks such as SDMX and data standardization in sectors like insurance and energy.

Fundamentals

Definition

An information model is a structured representation of concepts, entities, relationships, constraints, rules, and operations designed to specify the semantics of data within a particular domain or application. This representation serves as a blueprint for understanding and communicating the meaning of information, independent of any specific technology or implementation details. Key characteristics of an information model include its abstract nature, which allows for an implementation-independent structure that can be realized using various technologies, and its emphasis on defining what information is required rather than how it is stored, processed, or retrieved. By focusing on semantics, these models enable consistent interpretation of data across systems and stakeholders, facilitating interoperability and shared understanding without delving into technical storage mechanisms. In contrast to data models, which concentrate on the physical implementation—such as database schemas, tables, and storage optimization—information models prioritize the conceptual semantics and underlying business rules that govern the data. This distinction ensures that information models remain at a higher level of abstraction, serving as a foundation for deriving more implementation-specific data models. For example, a healthcare information model might define entities like patients, providers, and treatments, along with their interrelationships and constraints (e.g., a treatment must be linked to a patient record), without specifying underlying database structures or query languages.
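
To make the distinction concrete, the healthcare example can be written down as a small, technology-neutral description. The sketch below is purely illustrative (entity names, attributes, and constraints are assumptions, not drawn from any particular standard) and uses plain Python data structures so that the model stays independent of storage and query concerns.

```python
# A minimal, illustrative conceptual model for the healthcare example above.
# Entity names, attributes, and constraints are hypothetical; the point is that
# the model describes WHAT information exists, not HOW it is stored.

healthcare_model = {
    "entities": {
        "Patient":   {"attributes": ["patient_id", "name", "date_of_birth"]},
        "Provider":  {"attributes": ["provider_id", "name", "specialty"]},
        "Treatment": {"attributes": ["treatment_id", "description", "date"]},
    },
    "relationships": [
        # Each treatment must be linked to exactly one patient record.
        {"name": "receives", "from": "Patient", "to": "Treatment", "cardinality": "1:N"},
        # Each treatment is administered by one provider.
        {"name": "administers", "from": "Provider", "to": "Treatment", "cardinality": "1:N"},
    ],
    "constraints": [
        "Every Treatment instance must reference an existing Patient.",
        "Every Treatment instance must reference an existing Provider.",
    ],
}

for name, spec in healthcare_model["entities"].items():
    print(f"{name}: {', '.join(spec['attributes'])}")
```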

Purpose and Benefits

Information models serve as foundational tools in information systems development, primarily to facilitate clear communication among diverse stakeholders by providing a shared, unambiguous representation of data requirements and structures. This shared understanding bridges gaps between business analysts, developers, and end-users, ensuring that all parties align on the semantics and scope of the system from the outset. Additionally, they ensure consistency across applications by enforcing standardized definitions and constraints, support interoperability between heterogeneous systems through compatible exchange formats, and guide the progression from high-level requirements to concrete implementations by mapping conceptual needs to technical specifications. The benefits of employing information models extend to practical efficiencies in system development and operation. By reducing ambiguity during requirements gathering, these models minimize misinterpretations that could lead to costly rework, fostering a more precise articulation of business rules and data flows. They enable reuse of established models and components across multiple projects, accelerating development cycles and promoting consistency in data handling. Furthermore, information models enhance data quality by incorporating enforced semantics—such as defined relationships and validation rules—that prevent inconsistencies and errors in data entry and processing, ultimately lowering long-term maintenance costs through more robust, extensible architectures. Quantitative evidence underscores these advantages; for instance, studies on standardized information modeling approaches, such as those in building information modeling (BIM) applications, demonstrate up to 30% reductions in overall development time due to streamlined design and integration processes. In broader information systems contexts, data models have enabled up to 10-fold faster implementation of complex logic components compared to traditional methods without such modeling. In agile methodologies, information models support iterative refinement of business rules by allowing flexible updates to the model without disrupting core data structures, thereby maintaining adaptability while preserving underlying integrity.

Historical Development

Origins in Data Management

The origins of information models can be traced to pre-digital efforts in organizing knowledge, such as the Dewey Decimal Classification system developed by Melvil Dewey in 1876, which provided an analog framework for semantic categorization by assigning hierarchical numerical codes to subjects, thereby enabling systematic retrieval and representation of informational structures. This approach laid early groundwork for abstracting data meanings independent of physical formats, influencing later computational paradigms. In the 1960s, the limitations of traditional file systems—characterized by sequential access on tapes or disks, high redundancy, and tight coupling to physical storage—prompted the emergence of structured data models to abstract logical representations from underlying storage, facilitating data portability and independence. This transition was exemplified by IBM's Information Management System (IMS), released in 1968, which introduced a hierarchical model organizing data into tree-like parent-child relationships to represent complex structures efficiently for applications like NASA's Apollo program. Concurrently, the Conference on Data Systems Languages (CODASYL) Database Task Group published specifications in 1969 for the network model, allowing more flexible many-to-many relationships between record types and building on Charles Bachman's Integrated Data Store (IDS) concepts to enhance navigational data access. A pivotal advancement came in 1970 with Edgar F. Codd's seminal paper, "A Relational Model of Data for Large Shared Data Banks," which proposed representing data through relations (tables) with tuples and attributes, emphasizing data independence to separate user views from physical storage and incorporating semantic structures via keys and normalization to minimize redundancy. This model shifted focus toward declarative querying over procedural navigation, establishing foundational principles for information models that prioritized conceptual clarity and scalability in database systems.

Evolution in Computing Standards

The ANSI/SPARC three-schema architecture, developed in the late 1970s and formalized through the 1980s, established a foundational three-level modeling framework for database systems—comprising the external (user view), conceptual (logical structure), and internal (physical storage) schemas—that significantly influenced the standardization of information models by promoting data independence and abstraction. This architecture, outlined in the 1977 report of the ANSI/X3/SPARC study group, provided a blueprint for separating conceptual representations of data from implementation details, enabling more robust and portable modeling practices in standards. Its adoption in early database management systems helped transition models from ad-hoc designs to structured, standardized approaches that supported portability across diverse hardware and software environments. In the 1990s, the rise of object-oriented paradigms marked a pivotal shift in modeling, with the Object Data Management Group (ODMG) releasing its first standard, ODMG-93, which integrated semantic richness into database design and programming by defining a common object model, an Object Definition Language (ODL), and bindings for languages like C++ and Smalltalk. This standard addressed limitations of relational models by incorporating inheritance, encapsulation, and complex relationships, fostering the development of object-oriented database management systems (OODBMS) that treated information models as integral to application development. ODMG's emphasis on portability and semantics influenced subsequent standards, bridging the gap between data persistence and object-oriented programming paradigms in enterprise computing. The 2000s saw information models evolve further through the proliferation of XML for data exchange and the emergence of web services, which paved the way for Semantic Web initiatives; notably, the W3C's Resource Description Framework (RDF), recommended in 1999, provided a graph-based model for representing resources and their relationships in a machine-readable format, enhancing semantic interoperability on the web. Building on RDF, the Web Ontology Language (OWL), standardized by the W3C in 2004, extended information modeling capabilities with formal semantics for defining classes, properties, and inferences, enabling more expressive and reasoning-capable ontologies. These developments, rooted in XML's structured syntax, transformed information models from isolated database schemas into interconnected, web-scale frameworks that supported automated knowledge discovery and integration across distributed systems. As of 2025, recent advancements have integrated artificial intelligence techniques into information modeling, particularly through tools like Protégé for ontology engineering. Protégé, originally developed at Stanford University, supports plugins that enable AI-assisted development and enrichment of ontologies, such as generating terms and relationships from unstructured data sources. This integration aligns with broader standards efforts, including those from the W3C, to ensure AI-enhanced models maintain compatibility and verifiability across application domains.

Core Components

Entities and Attributes

In information modeling, entities represent the fundamental objects or concepts within a domain that capture essential aspects of the real world or abstract structures. An entity is defined as a "thing" which can be distinctly identified, such as a specific person, company, or event. These entities are typically nouns in the domain vocabulary, like "Customer" in a customer relationship management (CRM) system, and they form the primary subjects about which information is stored and managed. Entities are distinguishable through unique identifiers, often called keys, which ensure each instance can be referenced independently. Attributes are the descriptive properties or characteristics that provide detailed information about entities, specifying what data can be associated with each instance. Formally, an attribute is a function that maps from an entity set into a value set or a Cartesian product of value sets, such as mapping a person's name to a set of name values. Attributes include elements like customer ID (an identifier), name (a simple attribute), and address (a composite attribute), with specifications for data type (e.g., integer, string, date), cardinality (indicating whether single-valued or multivalued), and optionality (whether the attribute must have a value or can be null). These properties ensure attributes accurately reflect the semantics of the domain while supporting data integrity and query efficiency. Attributes are classified into several types based on their structure and derivation. Simple attributes are atomic and indivisible, such as a customer's ID or name, holding a single, basic value without subcomponents. In contrast, complex (or composite) attributes consist of multiple subparts that can be further subdivided, like an address composed of street, city, state, and postal code. Derived attributes are not stored directly but computed from other attributes or data, such as age derived from birthdate using the current date, which avoids redundancy while providing dynamic values. Multivalued attributes, like a customer's multiple phone numbers, allow an entity to hold a set of values for the same property. A representative example is a library information model featuring a "Book" entity. This entity might include attributes such as ISBN (a simple, single-valued key attribute of string type, mandatory), title (simple, single-valued string, mandatory), author (composite, potentially multivalued to handle co-authors, optional for anonymous works), and publication year (simple, single-valued integer, mandatory). In a basic entity-relationship sketch, the "Book" entity would be depicted as a rectangle labeled "Book," with ovals connected by lines representing attributes like ISBN, title, and author, illustrating how these properties describe individual book instances without detailing inter-entity connections.
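
The attribute categories above can be sketched in code. The following minimal Python example, whose field names and types are assumptions based on the "Book" illustration rather than a prescribed schema, shows simple, composite, multivalued, and derived attributes.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class AuthorName:
    """Composite attribute: a name broken into subparts."""
    given: str
    family: str


@dataclass
class Book:
    """Illustrative 'Book' entity from the library example."""
    isbn: str              # simple, single-valued key attribute (mandatory)
    title: str              # simple, single-valued (mandatory)
    publication_year: int   # simple, single-valued (mandatory)
    authors: List[AuthorName] = field(default_factory=list)  # composite and multivalued (optional)

    @property
    def age_in_years(self) -> int:
        """Derived attribute: computed from publication_year, not stored."""
        return date.today().year - self.publication_year


book = Book(isbn="978-0-00-000000-0", title="Example Title", publication_year=2018,
            authors=[AuthorName("Ada", "Lovelace")])
print(book.age_in_years)
```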

Relationships and Constraints

In information models, relationships define the interconnections between entities, specifying how instances of one entity associate with instances of another. These relationships are categorized by cardinality, which indicates the number of instances that can participate on each side. A one-to-one relationship occurs when exactly one instance of an entity is associated with exactly one instance of another entity, such as a marriage relationship linking two persons where each is paired solely with the other. One-to-many relationships allow one instance of an entity to relate to multiple instances of another, but not vice versa; for example, a department may employ multiple workers, while each worker belongs to only one department. Many-to-many relationships permit multiple instances on both sides, as seen when customers place orders for multiple products, and each product appears in multiple customer orders. To resolve many-to-many relationships while accommodating additional attributes on the association itself, associative entities are introduced. These entities act as intermediaries, transforming the many-to-many link into two one-to-many relationships and enabling the storage of descriptive data about the connection. For instance, in an order processing system, an "order details" associative entity links customers and products, capturing attributes like quantity and price for each specific item in an order. Constraints in information models enforce rules that maintain data integrity and consistency across relationships and entities. Referential integrity ensures that a foreign key value in one entity references a valid primary key value in a related entity, preventing orphaned records; for example, an order's customer ID must match an existing customer. Uniqueness constraints, part of entity integrity, require that primary keys uniquely identify each instance and prohibit null values in those keys, guaranteeing no duplicates or incomplete identifiers. Business rules impose domain-specific conditions, such as requiring an employee's age to exceed 18 for eligibility in certain roles, and are enforced through validation to align data with organizational policies. Semantic constraints extend these by incorporating domain knowledge and contextual rules, often addressing complex scenarios like temporal validity. Temporal constraints, for example, use valid-from and valid-to dates to define the lifespan of entity relationships or attributes, ensuring that historical versions of data remain accurate without overwriting current states; this is crucial in models tracking changes over time, such as employee assignments to projects. These constraints collectively safeguard the model's semantic fidelity, preventing invalid states that could arise from ad-hoc updates.
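
As a minimal sketch of how such constraints can be checked, the following plain-Python example models the order scenario with an associative "order details" record and validates referential integrity plus one business rule (quantity must be positive). The record layout and the rule are assumptions made for the example, not part of any standard.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Customer:
    customer_id: int
    name: str


@dataclass
class Product:
    product_id: int
    name: str


@dataclass
class OrderDetail:
    """Associative entity resolving the many-to-many link between customers and products."""
    customer_id: int
    product_id: int
    quantity: int
    unit_price: float


def validate(order_details: List[OrderDetail],
             customers: Dict[int, Customer],
             products: Dict[int, Product]) -> List[str]:
    """Return a list of constraint violations (an empty list means the data is consistent)."""
    errors = []
    for od in order_details:
        # Referential integrity: foreign keys must reference existing instances.
        if od.customer_id not in customers:
            errors.append(f"Order detail references unknown customer {od.customer_id}")
        if od.product_id not in products:
            errors.append(f"Order detail references unknown product {od.product_id}")
        # Business rule: quantity must be positive.
        if od.quantity <= 0:
            errors.append("Order detail quantity must be greater than zero")
    return errors


customers = {1: Customer(1, "Alice")}
products = {10: Product(10, "Widget")}
details = [OrderDetail(customer_id=1, product_id=10, quantity=2, unit_price=9.99),
           OrderDetail(customer_id=2, product_id=10, quantity=0, unit_price=9.99)]
print(validate(details, customers, products))
```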

Modeling Languages and Techniques

Conceptual Modeling Approaches

Conceptual modeling approaches encompass high-level, informal techniques employed in the initial phases of information model development to capture and structure domain knowledge without delving into formal syntax or implementation details. These methods prioritize collaboration and flexibility to elicit key concepts, ensuring the model reflects real-world semantics accurately. Common approaches include brainstorming sessions, use case analysis, and domain storytelling, each facilitating the identification of entities, relationships, and processes in an accessible manner. Brainstorming sessions involve group activities where participants generate ideas spontaneously to explore domain requirements and map out potential entities and interactions. This supports system-level decision-making by identifying tensions and key drivers early, as demonstrated in case studies from the energy sector where engineers used brainstorming to enhance awareness and communication in conceptual models. Use case analysis focuses on describing business scenarios to pinpoint critical entities and their roles, starting from operational narratives to define the foundational elements of an information model. By analyzing how actors interact with the system to achieve goals, this method ensures the model aligns with business needs, forming a bridge to more detailed representations. Domain storytelling, a collaborative workshop-based technique, uses visual narratives with actors, work objects, and activities to depict concrete scenarios, thereby clarifying domain concepts and bridging gaps between experts and modelers. This approach excels in transforming implicit domain knowledge into explicit models, and it supports agile requirement elicitation. Key techniques within these approaches include top-down and bottom-up strategies for structuring the model. The top-down method begins with broad, high-level domain overviews, progressively refining into specific concepts, which is effective for strategic alignment in enterprise modeling. In contrast, the bottom-up technique starts from concrete data instances or tasks, aggregating them into generalized entities, allowing for situated knowledge capture from operational levels. Tools such as mind mapping aid conceptualization by visually organizing ideas hierarchically around central themes, facilitating the connection of related concepts and simplifying domain exploration. This radial structure helps in brainstorming and initial entity identification, making complex information more digestible. For incorporating dynamic aspects, process modeling with BPMN can be integrated informally to outline event-driven behaviors alongside static entities, using flow diagrams to represent state changes and interactions without full formalization. This enhances the model's ability to capture temporal and causal relationships in information flows. Best practices emphasize iterative validation with stakeholders to ensure semantic accuracy, involving repeated workshops and feedback loops to refine concepts based on domain expertise. Such cycles, as applied in stakeholder-driven conceptual modeling, build engagement and shared understanding, reducing misalignment risks before transitioning to formal languages.

Formal Languages and Notations

Formal languages and notations enable the precise and unambiguous specification of information models by providing standardized syntax for describing structures, semantics, and constraints. These tools bridge conceptual designs with implementable representations, facilitating communication among stakeholders and automated processing in software tools. Key examples include diagrammatic and textual approaches tailored to relational, object-oriented, and domain-specific needs. The Entity-Relationship (ER) model, proposed by Peter Chen in 1976, serves as a foundational notation for expressing relational semantics in information models. It represents entities as rectangles, attributes as ovals connected to entities, and relationships as diamonds linking entities, with cardinality constraints indicated by symbols on relationship lines. This visual notation emphasizes data-centric views, making it particularly effective for database design where simplicity in relational structures is prioritized. Unified Modeling Language (UML) class diagrams provide a versatile notation for object-oriented information models, as defined in the OMG UML specification. Classes are depicted as boxes with compartments for attributes, operations, and methods; associations are lines connecting classes, often with multiplicity indicators; and generalizations enable inheritance hierarchies. UML class diagrams extend beyond basic relations to include behavioral elements, supporting comprehensive software system modeling. Other notable notations include EXPRESS, a formal textual language standardized in ISO 10303-11 for defining product data models in manufacturing and engineering contexts. EXPRESS supports declarative schemas with entities, types, rules, and functions, allowing machine-interpretable representations without graphical elements. Object-Role Modeling (ORM), developed by Terry Halpin, employs a fact-based approach using textual verbalizations and optional diagrams to model information as elementary facts, emphasizing readability and constraint declaration through roles and predicates. These notations commonly incorporate features such as generalization for subtype hierarchies, aggregation for part-whole relations without ownership, and composition for stronger ownership semantics, as prominently supported in UML class diagrams. Visual representations, like those in ER and UML, aid human interpretation through diagrams, while textual formats like EXPRESS enable precise, computable specifications suitable for exchange standards.
Notation | Pros | Cons
ER Model | Simpler syntax focused on relational data; easier for database designers to learn and apply in data-centric tasks. | Limited support for behavioral aspects and complex object hierarchies; less adaptable to applications beyond databases.
UML Class Diagrams | Broader applicability to object-oriented systems; integrates structural and behavioral modeling with rich semantics like inheritance and operations. | Steeper learning curve due to extensive features; potential for over-complexity in pure data modeling scenarios.
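
As a rough, tool-agnostic illustration of how the same underlying model can be rendered in different notations, the sketch below stores a tiny model as plain metadata and prints an ER-flavored and a UML-flavored textual view. The helper functions and example entities are assumptions made for illustration; real ER, UML, or EXPRESS tooling operates on far richer metamodels.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Entity:
    name: str
    attributes: List[str] = field(default_factory=list)


@dataclass
class Relationship:
    name: str
    source: str
    target: str
    cardinality: str  # e.g. "1:N"


def er_style(entities: List[Entity], rels: List[Relationship]) -> str:
    """Chen-flavored textual rendering: entities, attributes, and diamond relationships."""
    lines = [f"[{e.name}] with attributes ({', '.join(e.attributes)})" for e in entities]
    lines += [f"[{r.source}] --<{r.name}>-- [{r.target}]  ({r.cardinality})" for r in rels]
    return "\n".join(lines)


def uml_style(entities: List[Entity], rels: List[Relationship]) -> str:
    """UML-flavored textual rendering: class boxes and associations with multiplicities."""
    lines = [f"class {e.name} {{ " + "; ".join(e.attributes) + " }" for e in entities]
    for r in rels:
        left, right = ("1", "*") if r.cardinality == "1:N" else ("1", "1")
        lines.append(f'{r.source} "{left}" --> "{right}" {r.target} : {r.name}')
    return "\n".join(lines)


model_entities = [Entity("Department", ["dept_id", "name"]),
                  Entity("Employee", ["emp_id", "name"])]
model_rels = [Relationship("employs", "Department", "Employee", "1:N")]

print(er_style(model_entities, model_rels))
print(uml_style(model_entities, model_rels))
```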

Standards and Frameworks

International Standards

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have developed key standards for information models, emphasizing metadata management and interoperability. ISO/IEC 11179, first published in 1999 with its second edition in 2004 and updated to its fourth edition in 2023, defines a framework for metadata registries (MDRs) that standardizes the semantics of data elements to ensure consistent representation and sharing across systems. This multi-part standard includes specifications for conceptual data models (Part 3) and registration procedures (Part 6), enabling organizations to register and govern data elements for enhanced data understandability. Complementing this, ISO/IEC 19763, known as the Metamodel Framework for Interoperability (MFI) and revised in 2023 for its framework component (Part 1), provides a series of metamodels to register and map diverse models, including ontologies and process models, facilitating semantic alignment between heterogeneous systems. Other international bodies have contributed foundational and evolving frameworks for information modeling. In the 1980s, the National Institute of Standards and Technology (NIST) advanced Information Resource Management (IRM) through publications like Special Publication 500-92, which outlined strategies for managing information as a strategic asset, influencing modern data governance practices. This evolved into contemporary NIST frameworks that support interoperable information systems. Additionally, the World Wide Web Consortium (W3C) introduced RDF Schema (RDFS) in 2004, with updates to version 1.1 in 2014 and RDF 1.2 in 2025, offering a data-modeling vocabulary for describing RDF-based data models on the web, enabling extensible schemas for Semantic Web and linked data applications. These standards promote cross-system compatibility by providing neutral, reusable structures for defining entities, relationships, and semantics, reducing integration barriers in diverse environments. For instance, the Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR) standard, initiated in 2011, leverages modular resource-based information models aligned with ISO principles to enable seamless exchange of healthcare data across systems worldwide. As of 2025, emerging developments integrate blockchain technology with these semantic models to enhance security and immutability, such as through knowledge-graph encodings of smart contract logic for verifiable semantic interoperability, as explored in recent research on semantic frameworks.
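
A minimal illustration of the RDFS approach, assuming the third-party rdflib package, is shown below: it declares two classes and a property with a domain and a range, then serializes the vocabulary as Turtle. The namespace and the class and property names are invented for the example.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/model#")

g = Graph()
g.bind("ex", EX)

# Declare two classes and give them human-readable labels.
g.add((EX.Patient, RDF.type, RDFS.Class))
g.add((EX.Patient, RDFS.label, Literal("Patient")))
g.add((EX.Provider, RDF.type, RDFS.Class))
g.add((EX.Provider, RDFS.label, Literal("Provider")))

# Declare a property linking patients to providers.
g.add((EX.treatedBy, RDF.type, RDF.Property))
g.add((EX.treatedBy, RDFS.domain, EX.Patient))
g.add((EX.treatedBy, RDFS.range, EX.Provider))

print(g.serialize(format="turtle"))
```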

Industry-Specific Models

Industry-specific models are tailored conceptual frameworks designed to address the unique data requirements, processes, and regulatory demands of particular sectors, enabling standardized representation and exchange of domain-specific information. These models extend general standards by incorporating sector-unique entities, relationships, and semantics, facilitating interoperability among systems and stakeholders within vertical industries such as information technology, insurance, finance, and healthcare. In the information technology sector, the Common Information Model (CIM), developed by the Distributed Management Task Force (DMTF) starting in 1997, serves as a foundational object-oriented model for representing managed elements like hardware, software, and networks in enterprise environments. CIM provides a vendor-neutral vocabulary and structure for management information, supporting protocols like Web-Based Enterprise Management (WBEM) to enable consistent management across diverse systems. For the insurance industry, the Association for Cooperative Operations Research and Development (ACORD), established in 1970, develops standards for electronic data exchange, including XML-based models that define core entities such as policies, claims, and parties involved in insurance transactions. These standards promote efficient, automated workflows by standardizing data formats for property, casualty, life, and reinsurance operations, reducing errors in inter-company communications. In the financial sector, ISO 20022, first published in 2004 by the International Organization for Standardization (ISO), establishes a universal messaging standard for payments and securities, using a metamodel to define structured semantics for transactions, including remittance details and party identifications. This model supports rich, extensible data exchange across global payment systems, enhancing automation and reducing reconciliation issues in cross-border finance. The healthcare domain relies on SNOMED CT, released in 2002 through the merger of SNOMED RT and the UK's Clinical Terms Version 3, as a comprehensive, multilingual clinical terminology model maintained by SNOMED International. SNOMED CT organizes medical concepts hierarchically, covering diagnoses, procedures, and related clinical concepts, to support electronic health records and clinical decision-making with precise, coded representations. These sector-specific models deliver benefits such as improved regulatory compliance and data quality by embedding domain rules and privacy controls; for instance, in the European Union, adaptations of models like SNOMED CT in healthcare and ISO 20022 in finance align with GDPR requirements for secure handling of personal data, ensuring that consent management and data minimization principles are integrated into data flows.

Applications and Use Cases

Database Design

Information models provide the foundational blueprint for database design, enabling the systematic transformation of abstract business requirements into efficient, scalable database schemas. The mapping process starts with the conceptual information model, which identifies core entities, attributes, and relationships without regard to specific database technology, serving as a high-level abstraction of the data domain. This model is then refined into a logical data model, where entities become tables, attributes translate to columns with defined data types, and relationships are implemented as primary and foreign keys, ensuring referential integrity. The logical model addresses implementation-agnostic structures, such as normalization rules derived from the conceptual constraints, before advancing to the physical data model. In the physical design phase, the information model informs optimizations like indexing strategies on frequently queried attributes and partitioning schemes based on cardinalities, which enhance query performance and manage large-scale data volumes. For example, indexes may be applied to foreign keys representing many-to-one relationships to accelerate joins, while storage allocations align with entity volumes projected from the model. This iterative mapping ensures that the resulting schema remains faithful to the original information model while adapting to hardware and software constraints, such as those in relational database management systems (RDBMS). Tools facilitate this process by automating transformations, reducing manual errors and accelerating development cycles. Normalization is a critical step in logical design, directly informed by the constraints and dependencies outlined in the information model, to minimize redundancy and prevent anomalies during insert, update, or delete operations. First normal form (1NF) enforces atomicity by ensuring each attribute holds indivisible values and eliminates repeating groups, aligning with the model's definition of simple attributes. Second normal form (2NF) builds on 1NF by removing partial dependencies, so that non-key attributes depend on the entire primary key, often resolving issues in models with composite keys derived from entity relationships. Third normal form (3NF) further eliminates transitive dependencies, ensuring non-key attributes depend solely on the primary key, which preserves the integrity of attribute constraints from the information model. These forms collectively reduce redundancy-related overhead and support scalable querying, though higher forms like Boyce-Codd Normal Form (BCNF) may be applied selectively for complex dependencies. Reverse engineering complements forward design by deriving information models from existing databases, particularly in legacy systems where documentation is incomplete or outdated. This process involves analyzing physical schemas—such as table structures, constraints, and triggers—to reconstruct entities and relationships at the conceptual level, often using tools to infer business rules from data patterns. For relational databases, techniques include extracting entity-relationship diagrams (ERDs) by identifying primary keys as entities and foreign keys as relationships, while handling denormalized tables through dependency analysis to propose normalized equivalents. In practice, this enables modernization efforts, such as migrating COBOL-based systems to modern RDBMS, by revealing hidden semantics without disrupting operations; studies indicate recovery of a significant portion of original intent in well-structured databases. Challenges arise with poorly documented systems, where stakeholder validation supplements automated analysis to ensure the reconstructed model accurately reflects intended data flows.
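
As a rough sketch of this conceptual-to-physical mapping, the following uses Python's standard-library sqlite3 module to create a small normalized schema with primary keys, foreign keys, and an index on a frequently joined foreign key. The table and column names are assumptions invented for the example, not derived from any particular methodology or tool.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity in SQLite

# Logical model mapped to normalized (3NF) tables: entities become tables,
# attributes become columns, and relationships become foreign keys.
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);

CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    unit_price  REAL NOT NULL CHECK (unit_price >= 0)
);

CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    order_date  TEXT NOT NULL
);

-- Associative table resolving the many-to-many order/product relationship.
CREATE TABLE order_detail (
    order_id    INTEGER NOT NULL REFERENCES customer_order(order_id),
    product_id  INTEGER NOT NULL REFERENCES product(product_id),
    quantity    INTEGER NOT NULL CHECK (quantity > 0),
    PRIMARY KEY (order_id, product_id)
);

-- Physical-level choice informed by the model: index the foreign key used in frequent joins.
CREATE INDEX idx_order_customer ON customer_order(customer_id);
""")

print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type IN ('table', 'index') ORDER BY name")])
```
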
Computer-Aided Software Engineering (CASE) tools play a pivotal role in automating schema generation from information models, streamlining the mapping from conceptual to physical designs. ERwin Data Modeler, a widely adopted tool, supports forward engineering by generating DDL scripts directly from logical models, incorporating normalization checks and physical optimizations like index creation based on model annotations. Users define the conceptual model via ERDs, then use built-in wizards to produce database-specific schemas for platforms such as Oracle or SQL Server, with features for comparing models against existing databases to propagate changes. This automation not only enforces consistency with the information model but also integrates with version control, significantly reducing design time in enterprise environments. Other CASE tools follow similar paradigms, emphasizing bidirectional synchronization to maintain alignment between evolving models and deployed schemas.

Enterprise Architecture

In enterprise architecture, information models serve as foundational tools for aligning information systems with organizational strategies, enabling the creation of coherent architectures that support integration and interoperability across large-scale enterprises. These models define the structure, semantics, and relationships of business entities, ensuring that IT systems reflect business requirements and facilitate data sharing, governance, and reuse. By providing a shared understanding of information assets, they help organizations manage complexity in distributed environments, from legacy systems to cloud-native infrastructures. A prominent framework incorporating information models is The Open Group Architecture Framework (TOGAF), whose content metamodel—evolving since the 1990s—utilizes these models to organize architectural artifacts such as views, deliverables, and building blocks. In TOGAF, information models specify entities and relationships across domains like business, data, applications, and technology, promoting reuse and consistency in artifact development to bridge strategic goals with tactical implementations. This metamodel ensures that content is traceable and adaptable, supporting iterative enterprise transformations. Information models further enhance integration in service-oriented architecture (SOA) and microservices by establishing shared semantics that enable seamless communication and interchangeability among components. In SOA, they define common data structures and interfaces to compose services into cohesive processes, while in microservices architectures, semantic models—often based on ontologies or RDF—address challenges in dynamic, containerized environments by clarifying service capabilities and data mappings. For instance, a semantic model can classify microservice instances and their clusters, allowing for modular deployment and fault-tolerant scaling without semantic mismatches. In large enterprises such as banks, information models support regulatory reporting compliance, exemplified by their application in adhering to frameworks such as BCBS 239 through semantic approaches to data aggregation and risk reporting. Under risk data aggregation and reporting principles (integral to BCBS 239), banks employ centralized data dictionaries and models to ensure data accuracy, timeliness, and auditability for risk calculations. A proposed six-phase action plan validated with Portuguese banking executives outlines phases including master data management and quality controls, demonstrating how semantic models unify disparate systems for compliant reporting while reducing manual reconciliation efforts. Adopting model-driven architecture (MDA) approaches, which leverage information models, yields measurable returns on investment, including up to 30% improvements in development productivity and faster delivery times, with ROI often achieved within 12 months. These gains stem from automated code generation and reduced rework in integrating components, allowing enterprises to accelerate deployment cycles and lower maintenance costs in complex IT landscapes. As of 2025, information models are increasingly applied in emerging areas such as AI-driven systems and semantic data layers, enabling advanced data products and simplifying complex business problems.

Challenges and Future Directions

Current Limitations

One significant limitation in information modeling is scalability when applied to big data environments, where the volume, velocity, and variety of data can overwhelm traditional modeling techniques, often requiring dimension reduction or regularization methods to maintain performance. Handling evolving semantics in dynamic domains poses another challenge, as semantic process models must adapt to changing meanings and contexts, leading to difficulties in label standardization, refactoring, and ensuring consistency between model elements and textual descriptions. In particular, ambiguous labels and the need to map evolving terms across fragments can result in incomplete or inconsistent representations, especially in rapidly changing fields like business processes or AI-driven systems. Common pitfalls include over-abstraction, which often leads to poor usability by creating models that are too high-level or complex, causing misunderstandings among stakeholders and inefficiencies in implementation. For instance, mixing conceptual and physical modeling layers prematurely introduces unnecessary details, hindering clarity and maintainability. Similarly, integration conflicts between heterogeneous models exacerbate these issues, with semantic, structural, and syntactic discrepancies across data sources requiring extensive mapping and reconciliation efforts to avoid inconsistencies. Security gaps remain prevalent, particularly in modeling privacy constraints since the GDPR took effect in 2018, where inadequate incorporation of data lifecycle tracking and consent handling can expose sensitive information to risks like unauthorized access or failure to support rights such as erasure. Many models fail to embed privacy-by-design principles, leading to challenges in isolating personal data and ensuring compliance in complex environments. Empirical evidence underscores these limitations; for example, the 2024 Trends in Data Management report by DATAVERSITY indicates that 68% of organizations grapple with data silos, which contribute to outdated or misaligned information models. These trends highlight the need for ongoing updates, with emerging semantic layers offering potential mitigations, as explored below.

Emerging Trends

One prominent emerging trend in information modeling is the integration of artificial intelligence, particularly generative techniques that automate the creation and enrichment of models from existing data and textual inputs. Tools such as IBM Knowledge Catalog employ pretrained foundation models, including fine-tuned versions like granite-8b, to enrich data assets with AI-generated descriptions, terms, and alignments derived from contextual text. This approach facilitates metadata enrichment and governance by expanding asset names and assigning semantic terms with high accuracy, even without exact matches, thereby streamlining the creation of information models for AI-driven applications. Semantic technologies are advancing through the proliferation of knowledge graphs, which have evolved from static structures like Google's 2012 Knowledge Graph to dynamic, multi-modal models that incorporate text, images, and other data types. As of 2025, these graphs enable enhanced reasoning and retrieval in AI systems, as seen in frameworks that synergize temporal knowledge graphs with large language models to handle complex, real-world scenarios such as life sciences applications. Google's Knowledge Graph, for instance, experienced 2.79% growth from 2024 to mid-2025 but underwent a significant "clarity cleanup" in June 2025, removing over 3 billion entities (a 6.26% contraction) to improve quality and AI-powered search accuracy through refined entity resolution. This evolution addresses limitations in traditional graphs by fusing diverse modalities for richer semantic representations.
Blockchain and decentralized models are gaining traction for ensuring tamper-proof semantics in supply chains, leveraging distributed ledgers to create immutable records of information flows. Semantic-enhanced blockchain platforms facilitate flexible object discovery and traceability by validating smart contracts through consensus, allowing stakeholders to verify provenance without central authorities. In supply chain contexts, this technology reconstructs information sharing architectures to prevent tampering, as demonstrated in frameworks that use blockchain ledgers for secure, transparent data exchange across multi-tier operations. Looking ahead, Gartner forecasts that by 2030, 75% of IT work, including information modeling tasks, will be done by humans augmented with AI (with 25% done by AI alone and 0% without AI), underscoring the shift toward collaborative modeling paradigms, where AI handles routine generation and humans provide oversight, potentially transforming adoption rates in enterprise environments.
