Database normalization
Database normalization is a design technique for relational databases that organizes data into tables to reduce redundancy and avoid data anomalies during insertion, updates, and deletions, achieved by adhering to a hierarchy of normal forms that enforce rules on dependencies between attributes.[1] Introduced by Edgar F. Codd in his foundational 1970 paper on the relational model, normalization builds on the concept of relations as mathematical sets to ensure data independence and structural integrity.[2] The process begins with first normal form (1NF), which requires all attributes to contain atomic values with no repeating groups or multivalued fields, allowing relations to be represented as simple, two-dimensional arrays without embedded lists or arrays.[2] Building on 1NF, second normal form (2NF) eliminates partial dependencies by ensuring every non-prime attribute is fully functionally dependent on the entire candidate key, thus preventing subsets of composite keys from determining other attributes independently.[1] Third normal form (3NF) further refines this by removing transitive dependencies, where non-prime attributes depend only directly on candidate keys and not on other non-prime attributes, promoting a clearer separation of concerns in data storage.[1] Higher normal forms, such as Boyce-Codd normal form (BCNF)—a stricter variant of 3NF—address additional dependencies involving superkeys to further enhance anomaly prevention, though they may sometimes lead to increased query complexity due to more joins.[3] The primary goals of normalization include freeing relations from undesirable insertion, update, and deletion dependencies; reducing the need for database restructuring as new data types emerge; and making the schema more intuitive and neutral to evolving query patterns.[1] While full normalization to the highest forms optimizes integrity and storage efficiency, practical designs often balance it with denormalization for performance in read-heavy 
applications.[3]
Overview
Definition and purpose
Database normalization is the systematic process of organizing the fields and tables of a relational database to minimize redundancy and maintain data dependencies, thereby ensuring that data is stored efficiently and consistently. Introduced as part of the relational model, normalization structures data into progressive levels known as normal forms, each building on the previous to eliminate specific types of redundancies and dependencies. This approach protects users from the internal complexities of data organization while facilitating reliable data management operations.[2][1] The primary purposes of normalization include reducing data redundancy, which prevents the storage of the same information in multiple locations, and avoiding anomalies that arise during data manipulation. For instance, an update anomaly occurs when a single fact, such as a change in an employee's department, must be modified in multiple rows to maintain consistency, risking incomplete updates and inconsistencies if not all instances are addressed. Similarly, insertion anomalies prevent recording new facts without extraneous data, while deletion anomalies force the loss of unrelated information when removing a record. By addressing these issues, normalization enhances data integrity and supports more efficient querying by promoting a logical, non-redundant structure.[1] At its core, normalization is grounded in Edgar F. Codd's relational model, which emphasizes data independence and the use of relations—mathematical sets of tuples—to represent data without exposing users to storage details. The process relies on functional dependencies, where the value of one attribute uniquely determines another, to decompose relations into higher normal forms that free the database from undesirable insertion, update, and deletion dependencies. 
This not only minimizes the need for restructuring as new data types emerge but also makes the database more informative and adaptable for long-term application use.[2][1]
Historical development
Database normalization originated with the introduction of the relational model by Edgar F. Codd in his seminal 1970 paper, where he proposed the concept to ensure data integrity and eliminate redundancy in large shared data banks. Codd defined the first normal form (1NF) as a foundational requirement, mandating that relations consist of atomic values and no repeating groups. This marked the beginning of normalization as a systematic approach to database design within the relational framework.[4] In 1971, Codd expanded on these ideas in his paper "Further Normalization of the Data Base Relational Model," formalizing first normal form (1NF) more rigorously, introducing second normal form (2NF) to address partial dependencies, and defining third normal form (3NF) to eliminate transitive dependencies. These developments provided a structured progression for refining relational schemas to minimize anomalies. Later that decade, Raymond F. Boyce and Edgar F. Codd proposed Boyce-Codd normal form (BCNF) in 1974, strengthening 3NF by requiring that every determinant be a candidate key, thus resolving certain dependency preservation issues. Ronald Fagin advanced the theory further in 1977 with fourth normal form (4NF), targeting multivalued dependencies to prevent redundancy in relations with independent multi-valued attributes. Fagin also introduced fifth normal form (5NF), also known as project-join normal form, in 1979 to handle join dependencies that could lead to spurious tuples upon decomposition and recombination.[5][6][7] The evolution of normalization theory transitioned from academic foundations to practical implementation in relational database management systems during the 1980s. It profoundly influenced the design of SQL, the standard query language for relational databases, which was first formalized by the American National Standards Institute (ANSI) in 1986 as SQL-86. 
This standardization incorporated normalization principles to promote efficient, anomaly-free data storage and retrieval in commercial systems. Key contributors like Ramez Elmasri and Shamkant B. Navathe further disseminated these concepts through their influential textbook "Fundamentals of Database Systems," first published in 1989, which synthesized normalization for educational and professional use. In 2003, C. J. Date, Hugh Darwen, and Nikos Lorentzos extended the hierarchy with sixth normal form (6NF) in the context of temporal databases, emphasizing full temporal support by eliminating all join dependencies except those implied by keys.[8]
Fundamentals
Relational model essentials
The relational model organizes data into relations, which are finite sets of tuples drawn from the Cartesian product of predefined domains. Each relation corresponds to a table in practical implementations, where tuples represent rows and attributes represent columns, with each attribute associated with a specific domain defining its allowable atomic values. This structure ensures that all entries in a relation are indivisible scalars, such as integers or strings, without embedded structures like lists or arrays.[4][9] Tuples within a relation must be unique, enforced by the set-theoretic nature of relations, which prohibits duplicates and imposes no inherent order on rows or columns. Each tuple requires a unique identifier, typically through a designated set of attributes, to distinguish it from others and support data retrieval and integrity. Relations are thus unordered collections, emphasizing mathematical rigor over sequential or hierarchical representations.[4][10] A relational schema specifies the structure of a relation, including its name, the attributes, and their domains, serving as a blueprint for the database design. In contrast, a relation instance represents the actual data populating the schema at a given time, which can change without altering the underlying schema. Constraints play a crucial role in maintaining data quality: uniqueness constraints ensure no duplicate tuples and support primary keys for identification, while referential integrity constraints require that values in one relation match primary keys in another, preventing orphaned references.[9][11] Viewing relations as mathematical sets is essential for normalization, as it precludes non-relational designs such as repeating groups—multi-valued attributes within a single tuple—that could introduce redundancy and anomalies. This foundational adherence to set theory provides the clean, atomic basis from which normalization processes eliminate data irregularities.[4]
Dependencies and keys
In database normalization, functional dependencies (FDs) represent constraints that capture the semantic relationships among attributes in a relation, ensuring data integrity by specifying how values in one set of attributes determine values in another. Formally, an FD denoted as X → Y holds in a relation R if, for any two tuples in R that have the same values for attributes in X, they must also have the same values for attributes in Y; here, X is the determinant (or left side), and Y is the dependent (or right side).[1] For example, in an employee relation, the FD {EmployeeID} → {Name, Department} means that each employee ID uniquely determines the employee's name and department, preventing inconsistencies like multiple names for the same ID.[1] FDs are classified into types based on their structure and implications. A full functional dependency occurs when Y is entirely determined by X without reliance on any proper subset of a composite determinant, whereas a partial dependency arises when Y depends on only part of a composite key, such as {EmployeeID, ProjectID} → {Task} where {EmployeeID} alone might suffice for some attributes.[1] Transitive dependencies describe indirect determinations, where X → Z holds because X → Y and Y → Z for some intermediate Y; for instance, if {EmployeeID} → {Department} and {Department} → {DepartmentLocation}, then {EmployeeID} transitively determines {DepartmentLocation}.[1] These classifications help identify redundancies that normalization aims to eliminate. Keys in the relational model enforce uniqueness and referential integrity, serving as the foundation for identifying tuples without duplication.
A superkey is any set of one or more attributes that uniquely identifies each tuple in a relation, such as {EmployeeID, Name} in an employee table where the combination ensures no duplicates.[2] A candidate key is a minimal superkey, meaning it uniquely identifies tuples and no proper subset of its attributes does the same; for example, {EmployeeID} might be a candidate key if it alone suffices, while {EmployeeID, Name} is a non-minimal superkey.[2] The primary key is a selected candidate key designated for indexing and uniqueness enforcement in a relation, and a foreign key is an attribute (or set) in one relation that references the primary key in another, enabling links between tables like a DepartmentID in an employee relation pointing to a departments table.[2] Beyond FDs, other dependencies address more complex inter-attribute relationships. A multivalued dependency (MVD), denoted X →→ Y, holds if the set of values for Y associated with a given X is independent of other attributes in the relation; for example, in a relation with {Author} →→ {Book} and {Author} →→ {Article}, an author's books do not affect their articles.[6] Join dependencies generalize this further, where a join dependency *(R1, R2, …, Rk) on a relation R means R equals the natural join of its projections onto the subrelations R1 through Rk, capturing when a relation can be decomposed without information loss.[12] Armstrong's axioms provide a sound and complete set of inference rules for deriving all functional dependencies implied by a given set F of FDs, enabling systematic analysis of dependency closures. The axioms include: reflexivity (if Y ⊆ X, then X → Y); augmentation (if X → Y, then for any Z, XZ → YZ); and transitivity (if X → Y and Y → Z, then X → Z).[13] Applying these rules computes the closure F⁺, the complete set of FDs logically following from F.
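In practice, Armstrong's axioms are applied mechanically by computing the attribute closure X⁺: the set of all attributes determined by X under a given set of FDs. A minimal Python sketch of the standard closure algorithm (the FDs and attribute names are illustrative, not taken from the cited sources):

```python
# Compute the attribute closure X+ under a set of functional dependencies.
# Repeatedly apply any FD whose left side is already inside the closure,
# which is equivalent to exhaustively applying Armstrong's axioms.

def attribute_closure(attrs, fds):
    """`fds` is a list of (lhs, rhs) pairs of attribute frozensets."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

# Example FDs: EmployeeID -> Department, Department -> DepartmentLocation
fds = [
    (frozenset({"EmployeeID"}), frozenset({"Department"})),
    (frozenset({"Department"}), frozenset({"DepartmentLocation"})),
]

# The closure of {EmployeeID} includes DepartmentLocation via transitivity.
print(attribute_closure({"EmployeeID"}, fds))
```

Checking whether X → Y follows from F then reduces to testing Y ⊆ X⁺, which is how dependency-based normal forms are usually verified by tools.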
An Armstrong relation for F is a minimal relation that satisfies exactly the FDs in F⁺ and no others, serving as a tool to visualize and derive all implied dependencies without extraneous ones.[14]
Normal forms
First normal form (1NF)
First normal form (1NF) requires that every attribute in a relational table contains only atomic values, meaning indivisible, simple elements such as numbers or character strings, with no repeating groups, arrays, or nested structures within any cell. This foundational normalization level ensures that data is stored in a tabular format without multivalued dependencies embedded in individual entries, allowing for consistent querying and manipulation. Edgar F. Codd introduced this concept as part of the relational model, where domains are pools of atomic values to prevent complexity from nonsimple components.[4] The key requirements for a table to be in 1NF include: each column containing only single, atomic values of the same type; every row being unique to avoid duplicates; and the physical ordering of rows or columns being immaterial to the relation's logical content. Codd emphasized that relations are sets of distinct tuples, where duplicate rows are prohibited, and column order serves only for attribute identification without semantic implications. These properties guarantee that the relation behaves as a true mathematical set, supporting operations like projection and join without ambiguity.[15] Achieving 1NF involves identifying and decomposing multi-valued attributes or repeating groups by expanding the primary key into separate relations, thereby flattening the structure into atomic components. For instance, consider an unnormalized employee table with repeating groups in job history and children:
| Man# | Name | Birthdate | Job History | Children |
|---|---|---|---|---|
| E1 | Jones | 1920-01-15 | (1971, Mgr, 50k); (1968, Eng, 40k) | (Alice, 1945); (Bob, 1948) |
| E2 | Blake | 1935-06-22 | (1972, Eng, 45k) | (Carol, 1950) |

Decomposing into 1NF yields separate relations, each holding only atomic values:
| Man# | Name | Birthdate |
|---|---|---|
| E1 | Jones | 1920-01-15 |
| E2 | Blake | 1935-06-22 |
| Man# | Job Date | Title | Salary |
|---|---|---|---|
| E1 | 1971 | Mgr | 50k |
| E1 | 1968 | Eng | 40k |
| E2 | 1972 | Eng | 45k |

| Man# | Child Name | Birthyear |
|---|---|---|
| E1 | Alice | 1945 |
| E1 | Bob | 1948 |
| E2 | Carol | 1950 |
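The flattening step can be sketched in Python; the nested job-history list is expanded into one atomic row per entry, mirroring the decomposition above (toy data, for illustration only):

```python
# Flatten a repeating group into 1NF: each (year, title, salary) entry
# in the job-history list becomes its own row, keyed by Man#.

unnormalized = [
    {"man": "E1", "name": "Jones", "birthdate": "1920-01-15",
     "job_history": [(1971, "Mgr", "50k"), (1968, "Eng", "40k")]},
    {"man": "E2", "name": "Blake", "birthdate": "1935-06-22",
     "job_history": [(1972, "Eng", "45k")]},
]

# Employee relation: one atomic row per employee.
employees = [(r["man"], r["name"], r["birthdate"]) for r in unnormalized]

# Job-history relation: one atomic row per job entry, keyed by Man#.
jobs = [(r["man"], *job) for r in unnormalized for job in r["job_history"]]

print(employees)
print(jobs)
```

The key expansion is visible in `jobs`: the Man# value is repeated for each job entry, replacing the embedded list with a separate relation whose rows are atomic.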
Second normal form (2NF)
Second normal form (2NF) requires a relation to be in first normal form (1NF) and to eliminate partial functional dependencies, ensuring that every non-prime attribute is fully dependent on the entire candidate key rather than on any proper subset of it. This form addresses redundancy arising from composite keys, where a non-prime attribute depends only on part of the key, leading to update anomalies such as inconsistent data when modifying values tied to key subsets. Introduced by E.F. Codd, 2NF applies specifically to relations with composite candidate keys; relations with single-attribute keys are inherently in 2NF if they satisfy 1NF.[1] The requirements for 2NF stipulate that no non-prime attribute (one not part of any candidate key) can be functionally dependent on a proper subset of a candidate key, while allowing full dependence on the whole key. For instance, if a candidate key consists of attributes {A, B}, a non-prime attribute C must satisfy {A, B} → C, but neither {A} → C nor {B} → C may hold alone. This prevents scenarios where updating a value dependent on only one key component requires changes across multiple rows, risking inconsistency. Prime attributes, those included in at least one candidate key, are exempt from this full-dependence rule.[1][16] To achieve 2NF, the normalization process involves identifying partial functional dependencies through analysis of the relation's functional dependencies and decomposing the relation into two or more smaller relations. Each new relation should contain either the full candidate key or the subset causing the partial dependency, with non-prime attributes redistributed accordingly to eliminate the anomaly while preserving data integrity and query capabilities via joins.
This decomposition maintains all original dependencies but distributes them across relations without loss.[1][17] A classic example from Codd illustrates this: consider a relation T with attributes Supplier Number (S#), Part Number (P#), and Supplier City (SC), where {S#, P#} is the candidate key and SC functionally depends only on S# (a partial dependency), violating 2NF. The relation can be decomposed into T1(S#, P#) for shipments and T2(S#, SC) for supplier details, ensuring full dependence in each: now, SC depends entirely on S# in T2, and no partial issues remain in T1. This split reduces redundancy, as supplier city updates affect only T2 rows.[1]

Original relation T:
| S# | P# | SC |
|---|---|---|
| S1 | P1 | CityA |
| S1 | P2 | CityA |
| S2 | P1 | CityB |

Decomposed relation T1 (shipments):
| S# | P# |
|---|---|
| S1 | P1 |
| S1 | P2 |
| S2 | P1 |

Decomposed relation T2 (supplier city):
| S# | SC |
|---|---|
| S1 | CityA |
| S2 | CityB |
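A quick way to confirm that this split loses no information is to project the sample rows and rejoin them; a minimal sketch using the toy data from the tables above:

```python
# Decompose T(S#, P#, SC) into T1(S#, P#) and T2(S#, SC),
# then rejoin on S# to verify the decomposition is lossless.

T = {("S1", "P1", "CityA"), ("S1", "P2", "CityA"), ("S2", "P1", "CityB")}

T1 = {(s, p) for (s, p, c) in T}   # shipments: full key {S#, P#}
T2 = {(s, c) for (s, p, c) in T}   # supplier city: key {S#} alone

# Natural join of T1 and T2 on S# reconstructs T exactly.
rejoined = {(s, p, c) for (s, p) in T1 for (s2, c) in T2 if s == s2}
print(rejoined == T)  # True: no tuples lost, no spurious tuples added
```

Note that CityA appears twice in T but only once in T2, which is precisely the redundancy the 2NF split removes.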
Third normal form (3NF)
Third normal form (3NF) is a database normalization level that builds upon second normal form (2NF) by eliminating transitive dependencies among non-prime attributes. A relation is in 3NF if it is already in 2NF and every non-prime attribute is non-transitively dependent on each candidate key, meaning no non-prime attribute depends on another non-prime attribute.[1] This form ensures that all non-prime attributes directly reflect properties of the candidate keys without intermediate dependencies, reducing redundancy and potential anomalies in data updates, insertions, or deletions.[1] The primary requirement for 3NF is the removal of functional dependencies (FDs) of the form A → B → C, where A is a candidate key, B is a non-prime attribute, and C is another non-prime attribute, such that C is transitively dependent on A through B.[1] In such cases, the dependency A → C holds indirectly, leading to redundancy if B and C are stored repeatedly for each instance of A. To achieve 3NF, the relation must satisfy that for every non-trivial FD X → Y in the relation, either X is a superkey or Y is a prime attribute (part of a candidate key).[18] This stricter condition than 2NF addresses issues in relations with single-attribute keys or where partial dependencies have already been resolved. The normalization process to 3NF involves decomposing the relation by projecting out the transitive dependencies into separate relations while preserving the original FDs. For instance, consider a relation Employee with attributes EmployeeID (candidate key), Department, and Location, where EmployeeID → Department and Department → Location. This creates a transitive dependency EmployeeID → Department → Location. To normalize, decompose into two relations: EmployeeDepartment (EmployeeID, Department) and DepartmentLocation (Department, Location).
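The same project-and-rejoin check applies here; a small sketch with invented department and location values shows that each location is stored once after the split while the join still restores the original relation:

```python
# Employee(EmployeeID, Department, Location) with the transitive chain
# EmployeeID -> Department -> Location: the location 'NYC' is repeated
# once per Sales employee before the 3NF split.

employee = [("E1", "Sales", "NYC"), ("E2", "Sales", "NYC"), ("E3", "Eng", "SF")]

emp_dept = {(e, d) for (e, d, l) in employee}   # EmployeeDepartment
dept_loc = {(d, l) for (e, d, l) in employee}   # DepartmentLocation

print(dept_loc)  # each department's location now stored exactly once

# Rejoining on Department recovers the original relation.
rejoined = {(e, d, l) for (e, d) in emp_dept for (d2, l) in dept_loc if d == d2}
print(sorted(rejoined) == sorted(employee))  # True
```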
The join of these relations reconstructs the original without redundancy.[1] Compared to 2NF, which eliminates partial dependencies in composite-key relations by ensuring full dependence on the entire key, 3NF is stricter as it applies to all relations, including those with single-attribute keys, by targeting inter-attribute dependencies among non-prime attributes.[1] This makes 3NF essential for handling transitive chains that 2NF overlooks, providing a more robust structure for data integrity.[18]
Boyce–Codd normal form (BCNF)
Boyce–Codd normal form (BCNF) is a refinement of third normal form (3NF) in relational database normalization, introduced by Raymond F. Boyce and Edgar F. Codd in 1974 to further eliminate redundancy and dependency anomalies arising from functional dependencies. A relation schema R is in BCNF if, for every non-trivial functional dependency X → A that holds in R, X is a superkey of R. This condition ensures that no attribute is determined by a non-key set of attributes, thereby preventing update anomalies that could occur even in 3NF relations.[19] Unlike 3NF, which permits a functional dependency X → A where X is not a superkey as long as A is a prime attribute (part of some candidate key), BCNF imposes the stricter requirement that every determinant must be a candidate key. This addresses specific cases in 3NF where transitive dependencies or overlapping candidate keys allow non-key determinants, leading to potential redundancy. For instance, if a relation has multiple candidate keys and a dependency where the left side is part of one key but not a superkey overall, a BCNF violation occurs, whereas 3NF might accept it.[20][21] The process to normalize a relation to BCNF involves identifying a violating functional dependency X → A where X is not a superkey, then decomposing R into two relations: one consisting of X ∪ {A}, and the other consisting of all attributes of R except A (note that X appears in both relations, which is what makes the join lossless). This decomposition is applied recursively to each resulting relation until all are in BCNF. The algorithm guarantees a lossless join decomposition, ensuring that the natural join of the decomposed relations reconstructs the original relation without introducing spurious tuples or losing information.[22][19] Consider a relation TEACH with attributes {student, course, instructor} and functional dependencies {student, course} → instructor (the primary key dependency) and instructor → course.
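The BCNF condition can be checked mechanically by testing whether each determinant's attribute closure covers the whole schema; a sketch on the TEACH schema with the FDs just stated (the closure helper is illustrative):

```python
# BCNF check for TEACH(student, course, instructor): every non-trivial
# FD X -> A must have X as a superkey, i.e. closure(X) must cover R.

def closure(attrs, fds):
    out, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= out and not rhs <= out:
                out |= rhs
                changed = True
    return out

R = {"student", "course", "instructor"}
fds = [
    (frozenset({"student", "course"}), frozenset({"instructor"})),
    (frozenset({"instructor"}), frozenset({"course"})),
]

# Collect determinants whose closure does not cover all of R.
violations = [lhs for lhs, rhs in fds if not closure(lhs, fds) >= R]
print(violations)  # instructor -> course violates BCNF
```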
Here, {student, course} is the candidate key, placing the relation in 3NF, but instructor → course violates BCNF since instructor is not a superkey. Decomposing yields TEACH1 {instructor, course} and TEACH2 {student, instructor}, both now in BCNF with candidate keys {instructor, course} and {student, instructor}, respectively. This eliminates redundancy, such as repeating the course assignment for every student of the same instructor, while preserving all data through lossless join.[19][23]
Fourth normal form (4NF)
Fourth normal form (4NF) is a level of database normalization that eliminates redundancy arising from multivalued dependencies (MVDs) in relations already in Boyce–Codd normal form (BCNF). Introduced by Ronald Fagin in 1977, 4NF requires that, for every non-trivial MVD X →→ Y in D⁺ (the set of dependencies implied by the given dependencies D of schema R), X is a superkey for R. A non-trivial MVD is one where Y is neither a subset of X nor equal to R - X. This form ensures that independent multi-valued facts associated with a key are separated to prevent spurious tuples and update anomalies.[6] Multivalued dependencies capture situations where attributes are independent given a determinant, such as when multiple values of one attribute pair independently with multiple values of another. For instance, if X →→ Y holds, then for any two tuples t1 and t2 agreeing on X, there exist tuples t3 and t4 in the relation such that t3 combines t1's X∪Y values with t2's remaining attributes, and t4 does the reverse. Every functional dependency X → Y implies the corresponding MVD X →→ Y, but the converse does not hold: an MVD X →→ Y does not by itself imply the FD X → Y. Thus, achieving 4NF presupposes BCNF compliance, as violations of BCNF (FD-based) would also violate 4NF, but 4NF addresses additional redundancies from MVDs not reducible to FDs.[6][24] To achieve 4NF, decompose a relation violating it by identifying a non-trivial MVD X →→ Y where X is not a superkey, then split R into two projections: R1 = X ∪ Y and R2 = X ∪ (R - Y). This decomposition is lossless-join, preserving all information upon rejoining, though it may not preserve all dependencies. The process iterates until no violations remain. For example, consider a relation EmployeeProjectsSkills with attributes {Employee, Skill, Project}, where an employee can have multiple independent skills and projects (Employee →→ Skill and Employee →→ Project).
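The cross-product effect of two independent MVDs can be made concrete with a toy sketch (invented data: one employee with two skills and two projects):

```python
# Under Employee ->> Skill and Employee ->> Project, the single relation
# must pair every skill with every project for each employee.

skills = {"E1": ["S1", "S2"]}
projects = {"E1": ["P1", "P2"]}

# The unsplit relation: the full cross product per employee (4 tuples).
eps = sorted((e, s, p) for e in skills for s in skills[e] for p in projects[e])
print(eps)

# The 4NF decomposition records each independent fact exactly once.
employee_skills = [(e, s) for e in skills for s in skills[e]]
employee_projects = [(e, p) for e in projects for p in projects[e]]
print(employee_skills, employee_projects)
```

With two skills and two projects the counts happen to match (4 tuples versus 2 + 2 rows), but the cross product grows multiplicatively while the decomposed relations grow additively, and the decomposition removes the duplication of each skill across all projects.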
This leads to redundancy: if Employee E1 has skills S1, S2 and projects P1, P2, the relation stores four tuples (E1,S1,P1), (E1,S1,P2), (E1,S2,P1), (E1,S2,P2), repeating skills and projects unnecessarily. Decomposing yields EmployeeSkills {Employee, Skill} and EmployeeProjects {Employee, Project}, eliminating the redundancy while allowing natural joins to recover the original data.[25][24] A similar issue arises in a Books relation with attributes {Book, Author, Category}, where a book has multiple independent authors and categories (Book →→ Author and Book →→ Category). The unnormalized table might include redundant combinations, such as repeating each author across all categories for a book. Decomposition into BooksAuthors {Book, Author} and BooksCategories {Book, Category} separates these independent MVDs, reducing storage and avoiding anomalies like inconsistent category updates for a book's authors. This approach highlights 4NF's extension beyond BCNF by isolating pairwise independent multi-valued attributes, ensuring the relation captures only essential, non-redundant associations.[26]
Fifth normal form (5NF)
Fifth normal form (5NF), also known as projection-join normal form (PJ/NF), is defined for a relation schema such that every relation on that schema equals the natural join of its projections onto a set of attribute subsets, provided the allowed relational operators include projection.[7] This form assumes the relation is already in fourth normal form (4NF) and ensures that no non-trivial join dependency exists unless it is implied by the candidate keys of the relation.[7] In essence, 5NF prevents redundancy arising from complex interdependencies among attributes that cannot be captured by simpler functional or multivalued dependencies alone.[27] The primary requirement for 5NF is the absence of join dependencies that lead to spurious tuples when the relation is decomposed into three or more projections and then rejoined.[7] Such dependencies occur when attributes are cyclically related in a way that requires full decomposition to avoid anomalies, ensuring lossless recovery of the original data only through the complete set of projections.[27] This addresses cases beyond 4NF, where binary multivalued dependencies are resolved, by handling higher-arity interactions that could otherwise introduce update anomalies or redundant storage.[7] To normalize a relation to 5NF, identify any non-trivial join dependency not implied by the keys and decompose the relation into the minimal set of projections corresponding to the dependency's components, typically binary relations for practical schemas.[7] The process continues iteratively until the resulting relations satisfy the PJ/NF condition, meaning their natural join reconstructs the original relation without extraneous tuples.[27] This decomposition preserves all information while minimizing redundancy, though it may increase the number of relations and join operations in queries. 
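A three-way join dependency can be verified on a sample instance by comparing the relation with the join of its three binary projections; a toy sketch with invented data:

```python
# Check the join dependency *(AB, BC, AC) on a toy ternary relation
# R(A, B, C): R should equal the join of its three binary projections.

R = {("a1", "b1", "c1"), ("a1", "b2", "c2")}

AB = {(a, b) for (a, b, c) in R}
BC = {(b, c) for (a, b, c) in R}
AC = {(a, c) for (a, b, c) in R}

# A tuple appears in the 3-way join only if all three projections
# support it, which is what rules out spurious tuples.
rejoined = {(a, b, c)
            for (a, b) in AB for (b2, c) in BC for (a2, c2) in AC
            if b == b2 and a == a2 and c == c2}
print(rejoined == R)  # True: the join dependency holds for this instance
```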
A classic example illustrates 5NF in a supply chain scenario involving agents who represent companies that produce specific products.[27] Consider a ternary relation Agent-Company-Product where the business rule states: if an agent represents a company and that company produces a product, then the agent sells that product for the company. An unnormalized instance might include tuples like (Smith, Ford, car) and (Smith, GM, truck), but this form risks anomalies if, for instance, a new product is added without updating all agent-company pairs.[27] To achieve 5NF, decompose into three binary relations: Agent-Company (e.g., (Smith, Ford), (Smith, GM)), Company-Product (e.g., (Ford, car), (GM, truck)), and Agent-Product (e.g., (Smith, car), (Smith, truck)).[27] The natural join of these projections reconstructs the original ternary relation losslessly, as the join dependency ensures no spurious tuples are generated; for example, (Jones, Ford, car) would only appear if supported by all three components.[27] This full decomposition eliminates redundancy, such as avoiding repeated company-product pairs across agents, and prevents insertion or deletion anomalies that could arise in lower forms.[27] 5NF is equivalent to PJ/NF when projection is among the allowed operators, confirming its status as the highest standard normal form for addressing general join dependencies in relational schemas.[7]
Sixth normal form (6NF)
Sixth normal form (6NF) represents the highest level of normalization in the relational model, particularly suited for temporal databases where data validity varies independently over time. A relation is in 6NF if it is in fifth normal form and cannot be further decomposed by any nontrivial join dependency, meaning every join dependency it satisfies is trivial—implied entirely by its candidate keys. This results in relations that are irreducible, typically consisting of a primary key and a single non-key attribute, often augmented with temporal components such as validity intervals to capture when a fact holds true. The form eliminates all redundancy arising from independent changes in attribute values over time, ensuring that each tuple asserts exactly one elementary fact without spanning multiple independent realities.[28] The requirements for 6NF extend those of 5NF by prohibiting any nontrivial join dependencies whatsoever, even those implied by keys, which forces a complete vertical decomposition into binary relations (one key and one value) that track temporal histories separately. In temporal contexts, this involves incorporating interval-valued attributes for stated validity periods, allowing attributes like status or location to evolve independently without contradicting or duplicating data across tuples. For instance, in a supplier database, separate relations might track a supplier's existence (S_DURING with {SNO, DURING}), name (S_NAME_DURING with {SNO, NAME, DURING}), and status (S_STATUS_DURING with {SNO, STATUS, DURING}), each recording changes only when that specific fact alters, preventing anomalies from concurrent updates. 
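A toy sketch of these temporal relvars (the supplier numbers, status values, and intervals are invented for illustration):

```python
# 6NF-style temporal split for the supplier example: existence, name,
# and status each live in their own relation with a validity interval,
# so a status change touches only S_STATUS_DURING.

S_DURING = [("S1", (2001, 2010))]
S_NAME_DURING = [("S1", "Smith", (2001, 2010))]
S_STATUS_DURING = [
    ("S1", 20, (2001, 2004)),   # status changed in 2005 ...
    ("S1", 30, (2005, 2010)),   # ... name and existence rows untouched
]

def status_at(sno, year):
    """Look up a supplier's status for a given year, or None."""
    for s, status, (lo, hi) in S_STATUS_DURING:
        if s == sno and lo <= year <= hi:
            return status
    return None

print(status_at("S1", 2003), status_at("S1", 2007))
```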
This decomposition ensures lossless joins via U_Joins (universal joins) that respect temporal constraints, maintaining data integrity in historical relvars.[28] The normalization process to achieve 6NF involves iteratively decomposing 5NF relations into these atomic components, often using system-versioned tables that automatically manage validity intervals for each fact. Consider an employee-role scenario: instead of a single relation holding employee ID, role, department, and validity dates—which might redundantly repeat stable values during role changes—the design splits into independent relations like EMP_ROLE (EMP_ID, ROLE, VALID_FROM, VALID_TO) and EMP_DEPT (EMP_ID, DEPT, VALID_FROM, VALID_TO), with each tuple capturing a single change event. This approach, while increasing the number of relations and join complexity for queries, is essential for temporal databases to avoid update anomalies in time-varying data. 6NF was formally proposed by C. J. Date, Hugh Darwen, and Nikos A. Lorentzos in their 2003 work on temporal data modeling, emphasizing its role in handling bitemporal (valid time and transaction time) requirements without redundancy.[28][29]
Domain-key normal form (DKNF)
Domain-key normal form (DKNF) is a normalization level for relational database schemas that ensures all integrity constraints are logically implied by the definitions of domains and keys, providing a robust foundation for anomaly-free designs.[30] Proposed by Ronald Fagin in 1981, DKNF extends beyond dependency-based normal forms by focusing on primitive relational concepts—domains, which specify allowable values for attributes, and keys, which enforce uniqueness—rather than functional or multivalued dependencies.[30] This approach aims to eliminate insertion and deletion anomalies comprehensively, as a schema in DKNF is guaranteed to have none, and conversely, any anomaly-free schema satisfies DKNF.[30] A relation schema is in DKNF if every constraint on it is a logical consequence of its domain constraints and key constraints.[30] Domain constraints restrict attribute values, such as requiring an age attribute to be a positive integer greater than or equal to 0, while key constraints ensure that candidate keys uniquely identify tuples, preventing duplicates based on those attributes.[30] Requirements for DKNF include the absence of ad-hoc or business-specific rules that cannot be derived from these specifications; for instance, all integrity rules, like ensuring a salary is within a valid range, must stem directly from domain definitions rather than external assertions.[30] This eliminates the need for transitive dependencies or other non-key-derived restrictions, making the schema self-enforcing through its foundational elements. Achieving DKNF involves designing schemas where all constraints are captured by domains and keys from the outset. 
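The "self-enforcing schema" idea can be sketched as a toy insert routine in which every integrity rule is either a single-attribute domain predicate or a key (uniqueness) constraint; the schema, attribute names, and bounds here are invented:

```python
# DKNF intuition: all integrity rules reduce to domain predicates and
# key constraints, so a generic insert routine can enforce them all.

DOMAINS = {
    "emp_id": lambda v: isinstance(v, str) and v.startswith("E"),
    "salary": lambda v: isinstance(v, (int, float)) and 0 < v <= 500_000,
}
KEY = ("emp_id",)

def insert(table, row):
    # Domain constraints: each attribute value must satisfy its predicate.
    for attr, pred in DOMAINS.items():
        if not pred(row[attr]):
            raise ValueError(f"domain violation on {attr}")
    # Key constraint: no two rows may share the key attributes.
    key = tuple(row[a] for a in KEY)
    if any(tuple(r[a] for a in KEY) == key for r in table):
        raise ValueError("key violation")
    table.append(row)

emps = []
insert(emps, {"emp_id": "E1", "salary": 50_000})
try:
    insert(emps, {"emp_id": "E1", "salary": -10})
except ValueError as e:
    print(e)  # rejected: negative salary fails the domain predicate
```

No rule outside `DOMAINS` and `KEY` is needed to keep the table consistent, which is the property DKNF demands of the schema itself.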
For example, in an employee relation with attributes for employee ID (a key), name, department, and salary, the domain for salary might be defined as positive real numbers up to a maximum value, ensuring no invalid entries without relying on additional functional dependencies.[30] Similarly, constraints like "age greater than 18 for certain roles" would be enforced via a domain subtype or check integrated into the attribute definition, avoiding any non-derivable rules. Fagin's formulation demonstrates that DKNF implies higher traditional normal forms, such as Boyce-Codd normal form, particularly when domains are unbounded, offering a practical target for designs that transcend dependency elimination alone.[30]

Normalization process
Step-by-step normalization example
To illustrate the normalization process, consider a sample dataset from a bookstore management system tracking customer orders. The initial relation, in unnormalized form (UNF), contains repeating groups for multiple books per order, leading to redundancy and update anomalies such as inconsistent customer information across rows.[31] The unnormalized table is as follows:

| OrderID | CustomerName | CustomerEmail | BookTitles | BookPrices | BookQuantities | OrderDate |
|---|---|---|---|---|---|---|
| 1001 | Alice Johnson | [email protected] | "DB Basics", "SQL Guide" | $50, $30 | 1, 2 | 2025-01-15 |
| 1002 | Bob Smith | [email protected] | "NoSQL Intro" | $40 | 1 | 2025-01-16 |
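The cost of the repeating groups can be seen in a brief sketch (plain Python, using the embedded lists shown above): answering a simple question requires parsing strings in application code rather than issuing a relational query.

```python
# Minimal sketch of the unnormalized rows above: multivalued columns force
# string parsing in application code, which is exactly what 1NF removes.
unf = [
    {"OrderID": 1001, "BookTitles": '"DB Basics", "SQL Guide"', "BookPrices": "$50, $30"},
    {"OrderID": 1002, "BookTitles": '"NoSQL Intro"', "BookPrices": "$40"},
]

# Which orders include "SQL Guide"? We must search inside the embedded list:
hits = [row["OrderID"] for row in unf if "SQL Guide" in row["BookTitles"]]
print(hits)  # [1001]
```

Substring matching like this is fragile (a partial title would also match), which is one reason atomic values are required in 1NF.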
First Normal Form (1NF)
To achieve 1NF, eliminate repeating groups by creating a separate row for each book in an order and ensuring all attributes are atomic (single values). This removes the multivalued attributes and introduces a composite primary key (OrderID, BookTitle) to uniquely identify rows, reducing insertion anomalies where adding a new book requires modifying existing order data. The resulting 1NF relation is:

| OrderID | CustomerName | CustomerEmail | BookTitle | BookPrice | BookQuantity | OrderDate |
|---|---|---|---|---|---|---|
| 1001 | Alice Johnson | [email protected] | DB Basics | $50 | 1 | 2025-01-15 |
| 1001 | Alice Johnson | [email protected] | SQL Guide | $30 | 2 | 2025-01-15 |
| 1002 | Bob Smith | [email protected] | NoSQL Intro | $40 | 1 | 2025-01-16 |
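The 1NF relation can be sketched as a SQLite table (emails are placeholders, since the sample data elides them); the composite primary key enforces one row per book per order.

```python
import sqlite3

# Sketch of the 1NF relation above: atomic columns plus the composite
# primary key (OrderID, BookTitle) described in the text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Orders1NF (
        OrderID       INTEGER,
        CustomerName  TEXT,
        CustomerEmail TEXT,
        BookTitle     TEXT,
        BookPrice     REAL,
        BookQuantity  INTEGER,
        OrderDate     TEXT,
        PRIMARY KEY (OrderID, BookTitle)
    )
""")
rows = [
    (1001, "Alice Johnson", "alice@example.com", "DB Basics", 50, 1, "2025-01-15"),
    (1001, "Alice Johnson", "alice@example.com", "SQL Guide", 30, 2, "2025-01-15"),
    (1002, "Bob Smith", "bob@example.com", "NoSQL Intro", 40, 1, "2025-01-16"),
]
conn.executemany("INSERT INTO Orders1NF VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
# The key rejects a second row for the same book in the same order:
try:
    conn.execute("""INSERT INTO Orders1NF VALUES
        (1001, 'Alice Johnson', 'alice@example.com', 'DB Basics', 50, 3, '2025-01-15')""")
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
```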
Second Normal Form (2NF)
The 1NF relation is not in 2NF because of partial dependencies on the composite key (OrderID, BookTitle): CustomerName, CustomerEmail, and OrderDate depend on OrderID alone. Decompose into three relations: Customers (keyed by CustomerName), Orders (keyed by OrderID), and OrderItems (keyed by OrderID and BookTitle), with foreign keys linking them. This eliminates update anomalies, such as a change to a customer's email requiring updates to multiple rows. The 2NF relations are:

Customers:
| CustomerName | CustomerEmail |
|---|---|
| Alice Johnson | [email protected] |
| Bob Smith | [email protected] |
Orders:
| OrderID | CustomerName | OrderDate |
|---|---|---|
| 1001 | Alice Johnson | 2025-01-15 |
| 1002 | Bob Smith | 2025-01-16 |
OrderItems:
| OrderID | BookTitle | BookPrice | BookQuantity |
|---|---|---|---|
| 1001 | DB Basics | $50 | 1 |
| 1001 | SQL Guide | $30 | 2 |
| 1002 | NoSQL Intro | $40 | 1 |
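The 2NF decomposition above can be sketched in SQLite (emails are placeholders, since the sample data elides them); note that changing a customer's email now touches exactly one row, regardless of how many orders exist.

```python
import sqlite3

# Sketch of the 2NF decomposition: Customers, Orders, and OrderItems,
# linked by foreign keys as described in the text.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (
        CustomerName  TEXT PRIMARY KEY,
        CustomerEmail TEXT
    );
    CREATE TABLE Orders (
        OrderID      INTEGER PRIMARY KEY,
        CustomerName TEXT REFERENCES Customers(CustomerName),
        OrderDate    TEXT
    );
    CREATE TABLE OrderItems (
        OrderID      INTEGER REFERENCES Orders(OrderID),
        BookTitle    TEXT,
        BookPrice    REAL,
        BookQuantity INTEGER,
        PRIMARY KEY (OrderID, BookTitle)
    );
    INSERT INTO Customers VALUES ('Alice Johnson', 'alice@example.com'),
                                 ('Bob Smith',     'bob@example.com');
    INSERT INTO Orders VALUES (1001, 'Alice Johnson', '2025-01-15'),
                              (1002, 'Bob Smith',     '2025-01-16');
    INSERT INTO OrderItems VALUES (1001, 'DB Basics',   50, 1),
                                  (1001, 'SQL Guide',   30, 2),
                                  (1002, 'NoSQL Intro', 40, 1);
""")
# Updating Alice's email is now a single-row change, not one per order item:
cur = conn.execute(
    "UPDATE Customers SET CustomerEmail = 'aj@example.com' WHERE CustomerName = 'Alice Johnson'"
)
print(cur.rowcount)  # 1
```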
Third Normal Form (3NF)
The OrderItems relation from the previous step still contains a hidden dependency: BookPrice is determined by BookTitle alone, which is only part of the composite key (OrderID, BookTitle), so the price describes the book rather than the order. Decompose further by separating product details, introducing a ProductID as the key for products. This prevents anomalies such as inconsistent pricing when a book's price changes. Customers and Orders are unchanged; the remaining relations become:

Products:
| ProductID | BookTitle | BookPrice |
|---|---|---|
| 1 | DB Basics | $50 |
| 2 | SQL Guide | $30 |
| 3 | NoSQL Intro | $40 |
OrderItems:
| OrderID | ProductID | BookQuantity |
|---|---|---|
| 1001 | 1 | 1 |
| 1001 | 2 | 2 |
| 1002 | 3 | 1 |
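The payoff of this split can be shown with a small SQLite sketch using the sample data above: repricing a book updates a single Products row, and every order sees the new price through the join.

```python
import sqlite3

# Sketch of the 3NF Products / OrderItems split: each price lives in exactly
# one row, so a price change cannot leave orders inconsistent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Products (
        ProductID INTEGER PRIMARY KEY,
        BookTitle TEXT,
        BookPrice REAL
    );
    CREATE TABLE OrderItems (
        OrderID      INTEGER,
        ProductID    INTEGER REFERENCES Products(ProductID),
        BookQuantity INTEGER,
        PRIMARY KEY (OrderID, ProductID)
    );
    INSERT INTO Products VALUES (1, 'DB Basics', 50), (2, 'SQL Guide', 30), (3, 'NoSQL Intro', 40);
    INSERT INTO OrderItems VALUES (1001, 1, 1), (1001, 2, 2), (1002, 3, 1);
""")
# Reprice "DB Basics" once; order totals pick up the change via the join:
conn.execute("UPDATE Products SET BookPrice = 55 WHERE BookTitle = 'DB Basics'")
total = conn.execute("""
    SELECT SUM(p.BookPrice * i.BookQuantity)
    FROM OrderItems i JOIN Products p ON p.ProductID = i.ProductID
    WHERE i.OrderID = 1001
""").fetchone()[0]
print(total)  # 55*1 + 30*2 = 115
```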
Boyce–Codd Normal Form (BCNF)
Boyce–Codd normal form requires every determinant of a functional dependency to be a superkey. Assume an extension where supplier information is added: if supplier details were folded into the product data as a single relation (ProductID, SupplierID, SupplierName), the dependency SupplierID → SupplierName would have a determinant, SupplierID, that is not a superkey of that relation, violating BCNF. Decomposing the supplier attributes into Suppliers and SupplierProducts yields:

Suppliers:
| SupplierID | SupplierName |
|---|---|
| 101 | TechBooks Inc. |
| 102 | DataPress |
SupplierProducts:
| SupplierID | ProductID |
|---|---|
| 101 | 1 |
| 101 | 2 |
| 102 | 3 |
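A sketch of the decomposed schema in SQLite: because the supplier name now lives in a relation keyed by its determinant, renaming a supplier is a single-row update even though that supplier provides two products.

```python
import sqlite3

# Sketch of the BCNF split above: SupplierName is determined by SupplierID,
# so it is stored once in Suppliers rather than once per product.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Suppliers (
        SupplierID   INTEGER PRIMARY KEY,   -- the determinant is now a key
        SupplierName TEXT
    );
    CREATE TABLE SupplierProducts (
        SupplierID INTEGER REFERENCES Suppliers(SupplierID),
        ProductID  INTEGER,
        PRIMARY KEY (SupplierID, ProductID)
    );
    INSERT INTO Suppliers VALUES (101, 'TechBooks Inc.'), (102, 'DataPress');
    INSERT INTO SupplierProducts VALUES (101, 1), (101, 2), (102, 3);
""")
# Renaming supplier 101 touches one row, despite its two products:
cur = conn.execute("UPDATE Suppliers SET SupplierName = 'TechBooks LLC' WHERE SupplierID = 101")
print(cur.rowcount)  # 1
```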
Fourth Normal Form (4NF)
To demonstrate 4NF, consider an extension for customer preferences where a customer has multiple hobbies and multiple preferred book categories (independent multivalued dependencies: CustomerID →→ Hobby and CustomerID →→ Category). A non-4NF relation combining them would have to pair every hobby with every category, causing redundancy. Decompose into two independent relations:

CustomerHobbies:
| CustomerID | Hobby |
|---|---|
| C1 | Reading |
| C1 | Coding |
| C2 | Gaming |
CustomerCategories:
| CustomerID | Category |
|---|---|
| C1 | Database |
| C1 | Programming |
| C2 | Fiction |
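The redundancy argument can be made concrete with a small sketch (plain Python, using the sample values above): the combined relation must store the cross product of the two independent facts, and adding a single hobby means adding one row per existing category.

```python
# Sketch of the 4NF point above: Hobby and Category are independent
# multivalued facts about a customer, so a combined relation must pair
# every hobby with every category to stay consistent.
hobbies = {"C1": ["Reading", "Coding"], "C2": ["Gaming"]}
categories = {"C1": ["Database", "Programming"], "C2": ["Fiction"]}

# Combined (non-4NF) relation: cross product of the two facts per customer.
combined = [(c, h, k) for c in hobbies for h in hobbies[c] for k in categories[c]]

# Adding one new hobby for C1 requires one row per existing category in the
# combined form, but a single row in the decomposed CustomerHobbies relation:
new_rows_combined = len(categories["C1"])
new_rows_decomposed = 1
print(len(combined), new_rows_combined, new_rows_decomposed)  # 5 2 1
```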