Unique key
A unique key, also known as a unique constraint, is a database rule that enforces uniqueness on the values within one or more columns of a table, preventing duplicate entries while typically permitting null values in those columns.[1][2] In relational database management systems (RDBMS) such as SQL Server, Oracle, PostgreSQL, and DB2, unique keys maintain data integrity by ensuring that no two rows share the same non-null value in the specified column or combination of columns.[3][4][5]
Unique keys differ from primary keys in that they allow null values—often only one null per column, depending on the database system—and a table can have multiple unique keys, whereas it typically has only one primary key.[1][6] They can be defined on a single column for simple uniqueness, such as ensuring email addresses in a user table are distinct, or on multiple columns to form a composite unique key, like guaranteeing a unique combination of first name, last name, and birthdate.[7][8] Implementation occurs during table creation or alteration using SQL statements like CREATE TABLE with UNIQUE clauses or ALTER TABLE ADD [CONSTRAINT](/page/Constraint), and violations trigger errors to block invalid inserts or updates.[4][5]
In broader contexts, such as Azure Cosmos DB, unique keys extend this concept to NoSQL environments by enforcing uniqueness within logical partitions, supporting scalable data models while preserving integrity across distributed systems.[9] Unique keys play a critical role in database design by supporting referential integrity indirectly—through foreign keys referencing them—and optimizing query performance via underlying indexes automatically created by most RDBMS.[6][3]
Introduction
Definition
A unique key is a database constraint in relational databases that enforces uniqueness across one or more columns in a table, ensuring that no two rows contain identical values in those columns.[6] This mechanism prevents duplicate entries while allowing the constrained columns to accept NULL values, with multiple NULLs permitted since NULL is treated as distinct from other values and from itself in standard SQL semantics.[3] Unlike primary keys, which prohibit NULLs entirely, unique keys provide a flexible way to maintain distinctness without mandating non-nullability.[3]
Unique keys contribute to data integrity by guaranteeing the uniqueness of non-primary identifiers, such as email addresses or usernames in a users table, where duplicates could otherwise lead to inconsistencies in data referencing or application logic. For instance, enforcing a unique key on an email column ensures that each user record has a distinct email value (excluding NULLs), supporting reliable lookups and preventing errors in user authentication systems.
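The email-uniqueness example above, including the standard NULL semantics, can be sketched with Python's built-in sqlite3 module (SQLite follows the multiple-NULLs interpretation; the users table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# A duplicate non-NULL email violates the unique constraint.
try:
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# Multiple NULLs are permitted, since NULL is not equal to NULL.
conn.execute("INSERT INTO users (email) VALUES (NULL)")
conn.execute("INSERT INTO users (email) VALUES (NULL)")
null_count = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email IS NULL").fetchone()[0]
print(duplicate_rejected, null_count)  # True 2
```

The same schema would behave identically in PostgreSQL or Oracle; as noted below, SQL Server would reject the second NULL.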
The concept of unique keys originated in the early relational database models proposed by E.F. Codd in his 1970 paper, where they align with candidate keys—minimal sets of attributes that uniquely identify tuples and from which a primary key is selected.[10] These ideas were formalized in the SQL language through the ANSI SQL-86 standard, which introduced the UNIQUE constraint as part of its integrity mechanisms.[11]
Role in Data Integrity
Unique keys contribute to data integrity within relational databases by ensuring that no two rows contain identical values in the specified column or set of columns, thereby preventing the insertion of duplicate records that could lead to data inconsistencies, such as multiple entries representing the same real-world entity.[12] This mechanism upholds the uniqueness of data attributes, allowing databases to maintain accurate representations of entities without the risks associated with redundant or conflicting information.[13]
By providing reliable unique identifiers, unique keys indirectly support referential integrity in database relationships, as foreign keys in other tables can reference these constraints to establish valid links between data sets.[3] Additionally, they contribute to reducing data redundancy and mitigating update or delete anomalies, where changes to duplicated values might otherwise require modifications across multiple locations, potentially leading to errors or inconsistencies.[12]
Unique keys are particularly suited to business rules that demand uniqueness in non-mandatory fields, such as secondary phone numbers or email addresses, where null values are permissible but duplicates among non-null entries must be avoided.[12] This flexibility makes them ideal for scenarios like customer contact management, where optional yet unique attributes enhance data reliability without enforcing completeness on every record.[3]
In database design, the ability to apply multiple unique constraints to a single table enables the creation of flexible schemas for complex entities, accommodating various business requirements while centralizing integrity rules in the database structure for easier maintenance and enforcement.[12] This approach promotes scalable and robust data models that adapt to evolving needs without compromising overall integrity.[13]
Core Concepts in Relational Databases
SQL Specification
The unique key, formally known as a UNIQUE constraint in the SQL standard, is defined in ANSI SQL-92 (ISO/IEC 9075:1992) as a constraint ensuring that all non-NULL values in a specified column or set of columns are distinct across all rows in a table.[14] The constraint applies either to a single column or to multiple columns forming a composite unique key, and it enforces uniqueness at the table level rather than over subsets of rows.[14] Subsequent revisions, through SQL:2016 (ISO/IEC 9075-1:2016) and later, maintain this core definition while refining SQL's foundational integrity mechanisms; recent revisions add explicit NULL-handling options for unique constraints (NULLS DISTINCT or NULLS NOT DISTINCT), ensuring compatibility and evolution in relational database management.[15]
To declare a UNIQUE constraint, the SQL standard requires the use of the UNIQUE clause within a CREATE TABLE statement, either inline with a column definition (e.g., column_name datatype UNIQUE) or as a table-level constraint (e.g., CONSTRAINT constraint_name UNIQUE (column_name)).[14] Similarly, the ALTER TABLE statement supports adding a UNIQUE constraint post-table creation via ADD UNIQUE (column_name) or ADD CONSTRAINT constraint_name UNIQUE (column_name).[14] While the standard does not mandate the automatic creation of an index, most compliant implementations generate one to enforce the constraint efficiently, as uniqueness verification relies on rapid duplicate detection across the table.[1]
The scope of a UNIQUE constraint is confined to the entire table, with uniqueness enforced at the individual row level during insert and update operations that affect the constrained columns.[15] NULL values are explicitly ignored in uniqueness checks, meaning multiple rows may contain NULL in the constrained column(s) without violating the constraint, as NULL is not considered equal to another NULL or to any non-NULL value.[14] This treatment aligns with SQL's three-valued logic, where NULL represents an unknown value and does not participate in equality comparisons for constraint enforcement.[1]
SQL:1999 (ISO/IEC 9075-2:1999) enhanced UNIQUE constraints by introducing support for deferrable attributes, allowing the constraint to be specified as DEFERRABLE or NOT DEFERRABLE.[15] A DEFERRABLE UNIQUE constraint permits temporary violations during a transaction, with enforcement deferred until the transaction commits (e.g., via INITIALLY DEFERRED or SET CONSTRAINTS), providing greater flexibility for complex data manipulations without immediate failure.[1] This evolution from SQL-92's immediate enforcement model was carried forward and refined in later standards like SQL:2016, emphasizing transactional integrity while accommodating practical database operations.
Properties and Constraints
A unique key enforces uniqueness across the values in one or more specified columns within a single table, ensuring that no two rows share the same non-NULL combination of values in those columns.[4] This property aligns with the SQL standard, which permits multiple NULL values per constraint, as NULL is not considered equal to any value, including another NULL.[4] Unique keys can be composite, involving multiple columns, where uniqueness is evaluated based on the combined values of all included columns.[16]
In SQL's three-valued logic, NULL represents an unknown value, and comparisons involving NULL yield UNKNOWN rather than TRUE or FALSE; thus NULL ≠ NULL, allowing multiple rows to contain NULL in the constrained columns without violating the uniqueness rule.[4] This standard-mandated handling, which permits multiple NULL values per constraint, is followed by systems such as PostgreSQL and Oracle to support partial data scenarios, though some implementations, such as SQL Server, allow only one NULL.[4][16]
Database management systems enforce unique keys by rejecting INSERT or UPDATE operations that would introduce duplicate non-NULL values in the constrained columns, typically raising a constraint violation error.[4][16] This enforcement is immediate and relies on an underlying index structure, such as a B-tree index, to efficiently check for duplicates during data modifications.[4]
Unique keys have inherent limitations: they apply only within a single table and cannot enforce uniqueness across multiple tables, requiring separate mechanisms like foreign keys for inter-table relationships.[4] Additionally, unlike primary keys in some database systems, unique keys are not inherently clustered, meaning they do not automatically organize the table's physical storage order based on the key values.[17]
Comparisons with Other Keys
Versus Primary Key
A primary key is a specialized form of unique key that enforces both uniqueness and non-nullability on one or more columns, serving as the primary identifier for rows in a table, while a unique key enforces only uniqueness and permits null values (with some DBMS variations allowing multiple nulls).[4] Unlike unique keys, which can be defined on multiple columns or sets within the same table, a primary key is restricted to exactly one per table to maintain a single, authoritative reference point for entity identification.[18]
Selection of a primary key typically prioritizes a stable, efficient identifier such as an auto-incrementing integer column, which facilitates joins and queries across tables, whereas unique keys are applied to secondary attributes that must remain distinct but do not serve as the core row locator, such as an ISBN in a books table.[4] This distinction ensures that primary keys support referential integrity as targets for foreign keys, a role that unique keys can also fulfill.[4]
In terms of implementation implications, primary keys often default to a clustered index in many relational database management systems (DBMS), physically ordering the table data for optimized range queries and storage efficiency, while unique keys are usually implemented as non-clustered indexes, which maintain separate data structures for lookups without altering the table's physical order. This indexing behavior enhances primary keys' performance in join operations but may impose additional overhead for unique keys on large datasets.[18]
| Aspect | Primary Key | Unique Key |
|---|---|---|
| NULL Allowance | Not allowed; all values must be non-null.[4] | Allowed (multiple NULLs in most DBMS, per SQL standard).[4] |
| Multiplicity per Table | Only one allowed.[18] | Multiple allowed.[4] |
| Default Indexing | Often clustered by default (e.g., in SQL Server, MySQL InnoDB). | Typically non-clustered.[18] |
| Integrity Role | Enforces entity integrity as the main row identifier; target for foreign keys.[4] | Enforces attribute-level uniqueness; can be a target for foreign keys.[4] |
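The multiplicity rule in the table above can be checked directly in SQLite (one convenient test bed; schema names are hypothetical): several unique constraints on one table are accepted, while a second primary key is rejected at definition time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Multiple unique constraints on a single table are legal.
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "email TEXT UNIQUE, username TEXT UNIQUE)")

# A second primary key is a schema error, caught at CREATE TABLE time.
try:
    conn.execute(
        "CREATE TABLE bad (a INTEGER PRIMARY KEY, b INTEGER PRIMARY KEY)")
    second_pk_ok = True
except sqlite3.OperationalError:
    second_pk_ok = False
print(second_pk_ok)  # False
```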
Versus Foreign Key
A unique key constraint ensures that the values in one or more columns within a single table are distinct, thereby preventing duplicate entries and maintaining data uniqueness at the table level.[12] In contrast, a foreign key constraint establishes and enforces referential integrity across multiple tables by requiring that values in a column (or set of columns) match an existing primary key or unique key value in a referenced table.[19] This distinction is fundamental: unique keys operate internally to a table to avoid redundancy in source data, such as ensuring no two employees share the same email address, while foreign keys manage inter-table relationships, for instance, by validating that an order's product identifier corresponds to an actual entry in the products table.[4]
Although foreign keys can reference either a primary key or a unique key in the parent table—allowing flexibility in scenarios where a non-primary column needs to serve as a reference point—unique keys themselves do not inherently point to or validate against other tables.[12] For example, in a database schema, a foreign key in an orders table might link to a unique key on the product_code column in the products table, ensuring that only valid, unique product codes are used without duplicating product data.[20] This overlap enables unique keys to support relationships but underscores their primary role in intra-table enforcement rather than cross-table validation.[4]
A common pitfall arises from conflating the two: applying a unique key where a foreign key is needed fails to check referential integrity, potentially resulting in orphaned records that reference non-existent data in related tables.[12] Conversely, using a foreign key without ensuring the referenced column has a unique or primary key constraint can lead to inconsistencies, as foreign keys require the target to enforce uniqueness to prevent multiple matches.[19] Proper application of both constraints together strengthens overall data integrity by combining internal uniqueness with reliable relational links.[4]
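The orders/products relationship described above can be sketched with sqlite3 (SQLite enforces foreign keys only when the pragma is enabled; table and column names are illustrative): a foreign key referencing a unique key accepts valid links and rejects orphaned references.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opt-in FK enforcement
conn.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, product_code TEXT UNIQUE)")
# The foreign key targets the unique (non-primary) product_code column.
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "product_code TEXT REFERENCES products(product_code))")

conn.execute("INSERT INTO products (product_code) VALUES ('P-100')")
conn.execute("INSERT INTO orders (product_code) VALUES ('P-100')")  # valid link

# Referencing a product code that does not exist is an orphaned record
# and is rejected by the foreign key constraint.
try:
    conn.execute("INSERT INTO orders (product_code) VALUES ('P-999')")
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False
print(orphan_allowed)  # False
```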
Practical Implementation
Syntax for Creation
In standard SQL, unique constraints can be defined during table creation using the CREATE TABLE statement, either inline with a column definition or as a table-level constraint for single or multiple columns.[15] For a single column, the syntax is:
```sql
CREATE TABLE example (
    id INTEGER,
    email VARCHAR(255) UNIQUE
);
```
For composite unique constraints spanning multiple columns, the table-level constraint form is used:
```sql
CREATE TABLE example (
    id INTEGER,
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    UNIQUE (first_name, last_name)
);
```
To add a unique constraint to an existing table, the `ALTER TABLE` statement employs the `ADD CONSTRAINT` clause, optionally naming the constraint for easier management:
```sql
ALTER TABLE example ADD CONSTRAINT uk_email UNIQUE (email);
```
Unique constraints can be modified or removed using `ALTER TABLE`. To drop a named unique constraint, the syntax is:
```sql
ALTER TABLE example DROP CONSTRAINT uk_email;
```
In advanced SQL implementations supporting the standard's deferrable constraint features, unique constraints may be declared as deferrable, allowing violation checks to be postponed until the end of a transaction, with options such as `DEFERRABLE INITIALLY DEFERRED` or `DEFERRABLE INITIALLY IMMEDIATE` to control timing:
```sql
ALTER TABLE example ADD CONSTRAINT uk_email UNIQUE (email) DEFERRABLE INITIALLY DEFERRED;
```
Violations of unique constraints during insert or update operations trigger standard SQL error handling via [SQLSTATE](/page/SQLSTATE) codes; for example, SQLSTATE '23505' indicates a unique violation in systems such as [PostgreSQL](/page/PostgreSQL) that adhere to this convention ([PostgreSQL error codes appendix](https://www.postgresql.org/docs/current/errcodes-appendix.html)).
While the core syntax for creating and managing unique constraints remains consistent across database management systems compliant with the ISO/IEC 9075 SQL standard, variations exist in constraint naming conventions and in vendor-specific extensions.
Under standard SQL semantics, unique constraints allow multiple NULL values, as NULL does not compare equal to itself or to any other value.[15]
### Examples Across DBMS
In [MySQL](/page/MySQL), a unique key can be defined directly within a `CREATE TABLE` statement to enforce uniqueness on a column, such as an email field in a users table. For instance, the following SQL creates a table with an integer [primary key](/page/Primary_key) and a unique constraint on the email column:
```sql
CREATE TABLE users (
    id INT PRIMARY KEY,
    email VARCHAR(255) UNIQUE
);
```
This ensures that no two rows can have the same non-NULL email value.[21] MySQL unique constraints permit multiple NULL values in the constrained column, treating each NULL as distinct from others, which allows multiple rows with NULL emails without violation.
In PostgreSQL, unique constraints are often added to existing tables using the ALTER TABLE statement with a named constraint for better management and error reporting. An example adds a unique constraint on the ISBN column of a products table:
```sql
ALTER TABLE products ADD CONSTRAINT unique_isbn UNIQUE (isbn);
```
This automatically creates a unique B-tree index to enforce the constraint.[22] PostgreSQL also supports partial unique indexes, which enforce uniqueness only on the subset of rows that satisfy a specified condition, such as active records, optimizing storage and performance for selective data. For example:
```sql
CREATE UNIQUE INDEX active_email_idx ON users (email) WHERE active = true;
```
This indexes only rows where the active flag is true, allowing duplicate emails in inactive rows.[23]
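SQLite also supports partial indexes, so the behavior of the PostgreSQL example above can be sketched with sqlite3 (an illustrative stand-in; `active` is stored as an integer flag here): duplicates among inactive rows are allowed, while a second active row with the same email is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, active INTEGER)")
# Uniqueness is enforced only over rows matching the WHERE predicate.
conn.execute(
    "CREATE UNIQUE INDEX active_email_idx ON users (email) WHERE active = 1")

conn.execute("INSERT INTO users VALUES ('a@example.com', 1)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 0)")  # inactive: allowed
conn.execute("INSERT INTO users VALUES ('a@example.com', 0)")  # still allowed

# A second *active* row with the same email violates the partial index.
try:
    conn.execute("INSERT INTO users VALUES ('a@example.com', 1)")
    active_dup_ok = True
except sqlite3.IntegrityError:
    active_dup_ok = False
print(active_dup_ok)  # False
```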
In Microsoft SQL Server, unique constraints are implemented via unique indexes, which can also be created directly to obtain the same behavior. The syntax for a basic unique index on a column, such as a product code, is:
```sql
CREATE UNIQUE INDEX IX_products_code ON products (code);
```
This enforces uniqueness across the specified column.[24] Creating a unique constraint is equivalent, as SQL Server automatically generates a nonclustered unique index by default to support it, unless a clustered index is explicitly specified, providing efficient enforcement without altering the table's physical order.[25]
In Oracle Database, composite unique keys spanning multiple columns are typically added using the ALTER TABLE statement with an out-of-line constraint definition. For example, to ensure uniqueness across warehouse ID and name in a warehouses table:
```sql
ALTER TABLE warehouses ADD CONSTRAINT wh_unq UNIQUE (warehouse_id, warehouse_name);
```
This prevents duplicate combinations of values in the specified columns while permitting rows in which the constrained columns are NULL.[26] Oracle also supports using Flashback Query to investigate unique constraint violations by retrieving historical data states via SELECT ... AS OF with a timestamp or SCN, enabling diagnosis of the erroneous transactions that caused duplicates without a full recovery.[27]
Advanced Applications
Composite and Partial Unique Keys
Composite unique keys enforce uniqueness across a combination of multiple columns in a relational database table, ensuring that no two rows share the identical set of values in those specified columns, while allowing individual columns to contain duplicate values independently.[1] This approach is particularly useful for modeling compound identifiers where a single column alone cannot guarantee distinctness, such as in scenarios requiring the prevention of duplicate entries based on interrelated attributes.[3] For instance, in an order lines table, a composite unique key on (order_id, product_id) would prohibit multiple identical line items within the same order, thereby maintaining data integrity for business processes like inventory management.[1]
In SQL implementations, composite unique keys are typically defined using the UNIQUE constraint syntax applied to multiple columns, such as UNIQUE (column1, column2) during table creation or alteration.[3] Unlike single-column unique keys, these constraints treat null values permissively: a row with nulls in all composite key columns satisfies the constraint, but partial nulls may allow duplicates depending on the database system's handling.[1] Common limitations include restrictions on the number of columns (e.g., up to 32 in Oracle) and exclusions of certain data types like LOBs or user-defined objects, which can complicate schema design in complex environments.[1] Additionally, enforcing uniqueness over multiple columns can increase query complexity, as joins or filters must account for the full combination to avoid unintended duplicates.[3]
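The order-lines example above can be exercised with sqlite3 (an illustrative sketch; the order_lines schema is hypothetical): individual columns may repeat freely, but the (order_id, product_id) pair must be unique.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE order_lines (order_id INTEGER, product_id INTEGER, "
    "qty INTEGER, UNIQUE (order_id, product_id))")

conn.execute("INSERT INTO order_lines VALUES (1, 10, 2)")
conn.execute("INSERT INTO order_lines VALUES (1, 11, 1)")  # same order, new product
conn.execute("INSERT INTO order_lines VALUES (2, 10, 5)")  # same product, new order

# Repeating an existing (order_id, product_id) combination is rejected.
try:
    conn.execute("INSERT INTO order_lines VALUES (1, 10, 9)")
    dup_pair_ok = True
except sqlite3.IntegrityError:
    dup_pair_ok = False
print(dup_pair_ok)  # False
```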
Partial unique keys extend the concept of uniqueness to a selective subset of rows, applying the constraint only where a specified condition holds true, which is a feature supported in certain database management systems like PostgreSQL through partial unique indexes.[23] In this mechanism, the index—and thus the uniqueness enforcement—is built solely over rows satisfying a predicate (e.g., WHERE status = 'active'), allowing duplicates in rows outside that subset without violating the constraint.[23] A practical use case arises in auditing tables, where a partial unique key on (user_id, timestamp) with WHERE event_type = 'login' ensures no duplicate successful logins per user at the same time, while permitting multiple failed attempts.[23]
This selective application optimizes storage by excluding irrelevant rows from the index but requires precise alignment between the predicate and querying conditions to leverage the index effectively.[23] Limitations include the DBMS-specific nature of partial unique keys, potential for overlooked duplicates if predicates evolve, and added complexity in maintaining consistency across queries that may not match the index's subset exactly.[23] Overall, both composite and partial unique keys enhance relational data modeling by addressing multifaceted uniqueness requirements, though they demand careful consideration to balance integrity with operational efficiency.[1][23]
In major relational database management systems (DBMS) such as PostgreSQL, SQL Server, MySQL, and Oracle, defining a unique key constraint automatically generates a unique index on the constrained column or columns to enforce uniqueness and facilitate rapid data retrieval.[4][24][16] This index ensures that no duplicate values are inserted or updated, while also serving as an access path for queries, thereby combining data integrity enforcement with query acceleration.
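This automatic index creation can be observed directly in SQLite (used here as a lightweight stand-in for the systems above): `PRAGMA index_list` reports the auto-generated index that backs a UNIQUE column, even though no index was declared explicitly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (email TEXT UNIQUE)")

# No CREATE INDEX was issued, yet an automatic unique index exists.
names = [row[1] for row in conn.execute("PRAGMA index_list('t')")]
print(names)  # includes an entry like 'sqlite_autoindex_t_1'
```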
Unique indexes typically employ B-tree structures, which support efficient ordered access and range scans, leading to faster execution of SELECT statements and JOIN operations on the indexed columns. However, this integration introduces overhead during INSERT and UPDATE operations, as the DBMS must maintain the index by inserting, updating, or deleting index entries, though this cost is typically negligible.[28] In read-dominant environments, the benefits often outweigh these costs, with query performance gains from index usage reducing execution times significantly—for instance, merge joins on unique indexes can lower CPU utilization compared to non-unique counterparts.[28]
To optimize performance, unique keys are generally implemented as non-clustered indexes, which avoid the row-ordering overhead associated with clustered primary keys and reduce page splits during inserts. Administrators should monitor index fragmentation using tools like SQL Server's sys.dm_db_index_physical_stats or PostgreSQL's pgstattuple extension, as fragmentation above 30% can degrade scan performance and necessitate periodic reorganization or rebuilding.
In high-write environments, unique indexes on monotonically increasing columns—such as timestamps or sequences—can lead to contention at the rightmost index leaf, causing lock waits and reduced throughput in concurrent scenarios. For such cases, alternatives like filtered indexes in SQL Server allow uniqueness enforcement only on a subset of rows (e.g., WHERE active = 1), minimizing maintenance costs and storage by indexing fewer entries while still supporting targeted queries.[29] This approach can reduce index size in sparse data scenarios, balancing integrity with scalability.[29]