
SQL

Structured Query Language (SQL) is a domain-specific language designed for managing and manipulating data in relational database management systems (RDBMS), such as MySQL, PostgreSQL, Oracle Database, SQLite, and SQL Server. It enables users to perform operations like querying, inserting, updating, and deleting data through standardized commands, facilitating efficient interaction with structured data stored in tables. Originally developed to implement Edgar F. Codd's relational model, SQL has become the foundational language for database operations across industries, supporting tasks from simple data retrieval to complex analytics and transaction processing.

SQL's origins trace back to the early 1970s at IBM, where researchers Donald D. Chamberlin and Raymond F. Boyce created it as SEQUEL (Structured English QUEry Language) to query relational databases based on Codd's 1970 paper on the relational model. Due to trademark issues, it was renamed SQL in 1974 and first implemented in 1979 with IBM's System R prototype. Early adoption grew through relational databases like Oracle in 1979 and IBM's DB2 in 1983, establishing SQL as a critical tool for data management in business and scientific applications.

Standardization began in 1986 when the American National Standards Institute (ANSI) published SQL-86 (ANSI X3.135-1986), the first formal specification, which was quickly adopted internationally by the International Organization for Standardization (ISO) as ISO/IEC 9075:1987. Subsequent revisions include SQL-89 (minor updates for integrity constraints), the influential SQL-92 (adding features like outer joins and set operations), SQL:1999 (introducing object-relational extensions), and more recent versions up to SQL:2023, which enhances support for JSON, property graphs, and temporal data. These standards, developed by ANSI's INCITS and ISO/IEC JTC 1/SC 32 committees, ensure portability across RDBMS vendors while allowing proprietary extensions like Oracle's PL/SQL or Microsoft's T-SQL.

Key features of SQL include its declarative nature, where users specify what data is needed rather than how to retrieve it, enabling query optimizers to handle efficiency; support for ACID-compliant transactions to maintain data integrity; and scalability for handling large datasets in enterprise environments. It integrates seamlessly with programming languages like Python, Java, and R for applications in web development, data analysis, and machine learning, while security mechanisms such as access control and encryption protect sensitive information. Despite the rise of NoSQL alternatives, SQL remains dominant in modern data systems due to its maturity and widespread ecosystem.

History and Development

Origins in Relational Model

The origins of SQL trace back to Edgar F. Codd's seminal 1970 paper, "A Relational Model of Data for Large Shared Data Banks," which introduced the relational model as a framework for managing large-scale data storage and retrieval. In this work, Codd proposed representing data as relations—essentially tables with rows and columns—drawing from mathematical set theory to ensure data independence, where changes to physical storage do not affect logical access. Central to the model was relational algebra, a procedural formalism comprising operations like restriction (selection of rows based on conditions), projection (selection of specific columns while eliminating duplicates), and join (combining relations on common attributes). These operations provided a formal basis for manipulating relations without navigating physical pointers, addressing limitations in earlier hierarchical and network models.

Building directly on Codd's relational model, IBM initiated the System R project in 1973 at its San Jose Research Laboratory to prototype a full relational database management system (DBMS). The project team, including Donald Chamberlin and Raymond Boyce, developed SEQUEL (Structured English QUEry Language) as the query interface, aiming to translate relational algebra into an English-like, declarative syntax accessible to non-programmers. Unlike procedural languages that required specifying how to retrieve data, SEQUEL focused on what data was needed, allowing users to express queries without concern for storage details or execution paths. This approach supported ad hoc querying, programmed transactions, and dynamic environments with concurrency control and recovery mechanisms. The name was later shortened to SQL in 1974 due to a trademark conflict with the UK-based Hawker Siddeley aircraft company.

Key early features of SEQUEL/SQL directly mirrored relational algebra primitives, emphasizing simplicity and power for relational data manipulation. The SELECT statement implemented selection and projection, enabling users to filter rows by conditions and choose specific columns, as in retrieving employee names where salary exceeds a threshold. JOIN operations facilitated combining related tables, such as merging employee and department relations on a shared key, preserving Codd's emphasis on value-based associations over physical links. During System R's Phase Zero (1974–1975), an initial prototype supported basic SELECT with subqueries, while full JOIN capabilities emerged in subsequent phases by 1979, validating the language's viability for practical relational DBMS. These foundations established SQL as a user-friendly bridge between theoretical relational algebra and real-world database applications.

Key Milestones and Implementations

The first commercial implementation of SQL arrived with Oracle Version 2 in 1979, marking the debut of a fully relational database management system (RDBMS) available to businesses. Developed by Relational Software, Inc. (later renamed Oracle Corporation), this release introduced SQL as a practical query language for structured data management, enabling efficient data retrieval and manipulation on minicomputers like the Digital Equipment Corporation's PDP-11. Oracle's pioneering effort set the stage for SQL's transition from research prototype to enterprise tool, supporting early applications in inventory and financial systems.

IBM followed with DB2 in 1983, a pivotal enterprise RDBMS that integrated SQL into mainframe environments, particularly on the MVS operating system. Announced on June 7, 1983, and generally available in 1985, DB2 emphasized scalability and reliability for large-scale transaction processing, becoming a cornerstone for banking and finance operations. Its adoption accelerated SQL's use in mission-critical workloads, with features like transaction logging and recovery driving compliance with emerging standards.

Microsoft entered the SQL landscape in 1989 with SQL Server 1.0, initially a joint development with Sybase and Ashton-Tate to port SQL functionality to OS/2 and later Windows platforms. This release targeted mid-range servers, offering cost-effective alternatives to mainframe systems and facilitating SQL's penetration into personal computing and departmental applications. By providing tools for developers to build client-server architectures, SQL Server boosted SQL's accessibility for small to medium enterprises, evolving into a dominant force in Windows ecosystems.

The open-source movement democratized SQL access in the mid-1990s, beginning with MySQL's founding in 1995 by David Axmark, Allan Larsson, and Michael "Monty" Widenius. The first stable release in May 1995 introduced a lightweight, multi-threaded RDBMS optimized for web applications, rapidly gaining traction among startups for its ease of deployment and zero-cost licensing. Similarly, PostgreSQL emerged in 1996 as an evolution of the academic Postgres project, with version 6.0 renaming it to highlight SQL compliance while retaining advanced features like extensible types. These implementations lowered barriers to entry, enabling widespread experimentation and contributing to SQL's ubiquity in internet infrastructure.

Standardization efforts culminated in ANSI's SQL-86 approval in 1986, the first formal specification (ANSI X3.135) that defined core syntax for data definition and manipulation, adopted internationally by ISO in 1987. This standard spurred vendor compliance, with early adopters like IBM and Oracle aligning products to its entry-level requirements, reducing proprietary dialects and fostering interoperability. Over time, growing adherence—evident in certifications for SQL-89 revisions—encouraged broader standardization, though full conformance varied by vendor.

SQL played a central role in the late 1990s dot-com boom, powering the rapid scaling of web databases for e-commerce and content management sites. As internet traffic surged, RDBMS like MySQL and Oracle handled dynamic queries for user sessions and transactions, supporting the era's "get big fast" strategies amid explosive venture funding. This period solidified SQL's position in high-volume environments, with adoption rates accelerating as companies built data-driven platforms.

In the big data era, SQL adapted through integrations with distributed frameworks like Hadoop and Spark, enabling queries over petabyte-scale datasets. Tools such as Apache Hive (introduced in 2008 but maturing in the 2010s) provided SQL interfaces on Hadoop's HDFS, while Spark SQL (released in 2014) offered in-memory processing for faster analytics on large datasets. These extensions preserved SQL's declarative paradigm, bridging traditional RDBMS with big data systems and facilitating hybrid architectures in cloud environments.

Evolution to Modern Standards

The SQL standards evolved iteratively through revisions managed by ANSI and ISO, incorporating enhancements to address growing data complexity and analytical needs. The inaugural SQL-86 standard, published by ANSI in 1986 and adopted by ISO in 1987, established core syntax for data definition language (DDL) and data manipulation language (DML), including SELECT, INSERT, UPDATE, DELETE, and basic schema elements like tables and views. SQL-89, a minor update in 1989, introduced integrity constraints such as primary keys, foreign keys, DEFAULT values, and CHECK conditions. SQL-92, released in 1992, expanded query capabilities with explicit JOIN types (including outer joins), subqueries, set operations (UNION, INTERSECT, EXCEPT), and the CASE expression, while adding support for date/time data types and transaction isolation levels.

Subsequent standards built on this foundation with advanced features. SQL:1999 introduced common table expressions (CTEs) enabling recursive queries, along with OLAP extensions like ROLLUP, CUBE, and GROUPING SETS for multidimensional analysis. SQL:2003 added window functions for row-based analytics, XML data type and querying support, and sequence generators. SQL:2008 incorporated temporal data handling, enhancements to the MERGE statement for upsert operations, and the TRUNCATE TABLE command. SQL:2011 refined temporal features with period specifications and enhanced window framing options. SQL:2016 introduced row pattern recognition via the MATCH_RECOGNIZE clause for identifying sequences in result sets, initial JSON functions for document handling, and polymorphic table functions for dynamic schemas. The current SQL:2023 standard adds property graph queries (SQL/PGQ) as a new part for modeling and traversing graph data within relational tables, alongside a native JSON data type and expanded JSON operations.

The advent of big data frameworks prompted SQL adaptations for distributed environments. Apache Hive, developed at Facebook and open-sourced in 2008, introduced HiveQL—a SQL dialect for querying petabyte-scale data stored in the Hadoop Distributed File System (HDFS)—bridging traditional SQL with batch processing. Spark SQL, integrated into Apache Spark and first released in 2014, enabled SQL queries over structured data with in-memory computation, supporting complex analytics across clusters far beyond traditional RDBMS limits.

Database vendors extended standards with proprietary innovations while pursuing compliance. PostgreSQL attained substantial SQL-92 conformance in version 7.2, released in 2002, incorporating features like primary keys, quoted identifiers, and enhanced type casting. It later innovated with the JSONB type in version 9.4 (2014), a binary format for efficient storage and indexing, predating native standard support.

Cloud-native services further modernized SQL by emphasizing serverless execution. Google BigQuery, announced in 2010 and generally available in 2011, pioneered a serverless data warehouse using standard SQL to analyze terabytes of data without managing infrastructure. AWS Athena, launched in 2016, extended this model by allowing ad-hoc SQL queries on data in Amazon S3, leveraging Presto for federated access and pay-per-query pricing.

Standardization and Interoperability

Standardization Process

The standardization of SQL began with the American National Standards Institute (ANSI), which published the first formal SQL standard in 1986 as ANSI X3.135, aiming to establish a common language for relational database management systems amid growing proprietary implementations by vendors. This effort was adopted internationally by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in 1987 as ISO/IEC 9075, marking the start of ongoing global coordination. The primary responsibility for developing and revising SQL standards now lies with the ISO/IEC Joint Technical Committee 1 (JTC1) Subcommittee 32 (SC32) on Data Management and Interchange, specifically its Working Group 3 (WG3) on Database Languages.

The standardization process is managed through collaborative drafting by WG3, which consists of representatives from national standards bodies, industry experts, and database vendors; this group convenes regular meetings—both in-person and virtual—to propose, debate, and refine technical specifications. Drafts undergo rigorous public review and balloting phases, where national member bodies submit comments and votes, often requiring multiple editing meetings to resolve issues before final approval by ISO. Major revisions typically occur in multi-year cycles of 3 to 5 years, though earlier intervals like the seven-year gap between SQL-92 and SQL:1999 reflect the complexity of achieving consensus on evolving features. For instance, the development of SQL:2023 involved over 30 WG3 meetings spanning several years, culminating in the submission of final text to the ISO Central Secretariat in early 2023.

The core goals of this process are to promote portability and interoperability across database systems, thereby reducing vendor lock-in by defining a baseline of consistent behavior that implementations can rely upon without proprietary dependencies. Standards distinguish between mandatory "Core SQL" features, which all conforming implementations must support, and optional "Enhanced SQL" features, allowing vendors flexibility for advanced capabilities while ensuring basic compatibility. In the 1980s and early 1990s, the push for standardization arose directly from concerns over vendor-specific SQL dialects that locked users into particular systems, a problem addressed by ANSI's initial effort and amplified by the comprehensive SQL-92 revision, which achieved widespread adoption as vendors aligned their products with its entry-level requirements to demonstrate compliance.

In recent cycles, the process has incorporated deliberations on modern data management needs, such as enhanced support for JSON data handling and property graph queries, reflecting WG3's adaptation to contemporary applications like graph and document processing. These updates maintain the standard's relevance by balancing backward compatibility with incremental innovations, ensuring SQL remains a foundational technology for database management.

Current SQL Standard (SQL:2023)

The SQL:2023 standard, officially known as ISO/IEC 9075:2023, was published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in June 2023. This ninth edition of the SQL standard builds upon previous versions by introducing enhancements to support modern data models and querying paradigms. The standard is structured into 11 active parts, each addressing specific aspects of the language. For instance, Part 1 defines the framework, including grammar and processing rules; Part 2 covers the foundation, encompassing core data manipulation and query features; Part 9 addresses management of external data; and Part 11 provides information and definition schemas. A notable addition is Part 16, dedicated to property graph queries.

Key new features in SQL:2023 emphasize integration with contemporary data structures. The introduction of property graph queries in Part 16 (SQL/PGQ) allows users to model and traverse tabular data as property graphs using the GRAPH_TABLE clause, enabling efficient graph-based operations like path finding and pattern matching without requiring a separate graph database. Additionally, Part 2 enhances JSON support with a native JSON data type (feature T801), simplified path accessors, and item methods for constructing and manipulating JSON values, including functions for dates and times that align with ISO 8601 formatting conventions. The MERGE statement receives further refinements under optional feature F313, supporting more flexible conditional inserts, updates, and deletes in scenarios involving complex data synchronization. These additions aim to bridge relational and non-relational paradigms while maintaining backward compatibility.

Adoption of SQL:2023 remains partial across major database systems, as many features are optional and full compliance is uncommon due to implementation choices and performance considerations. PostgreSQL 18, released in 2025, builds on PostgreSQL 16's (September 2023) initial enhancements, such as improved JSON functions and greatest/least aggregates, incorporating additional core aspects of the standard. Oracle Database 23ai (updated as of October 2025), succeeding the 23c preview from 2023, supports a broader range, including the new property graph queries via SQL/PGQ and advanced JSON capabilities, positioning it as one of the more comprehensive implementations. However, no vendor achieves complete adherence to all optional features, leading to variations in supported syntax and behavior. Work on the next revision of the SQL standard is underway within ISO/IEC JTC1/SC32, expected within the next three to five years.
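To make the property graph feature concrete, the following is a minimal sketch of a SQL/PGQ query. It assumes a property graph named friends_graph has already been defined over person and knows tables with CREATE PROPERTY GRAPH; the graph, table, and column names are illustrative, and vendor support (e.g., Oracle Database 23ai) varies:
sql
SELECT person_a, person_b
FROM GRAPH_TABLE (friends_graph
  -- Match pairs of people connected by a "knows" edge
  MATCH (a IS person)-[e IS knows]->(b IS person)
  COLUMNS (a.name AS person_a, b.name AS person_b)
);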

Compatibility Challenges Across Implementations

Despite efforts toward standardization, SQL implementations by major database vendors introduce compatibility challenges through proprietary extensions, selective adoption of optional features, and maintenance of legacy behaviors. Vendor extensions, such as Oracle's PL/SQL for procedural logic and MySQL's distinct stored routine syntax, enhance functionality but diverge from the core SQL standard, complicating cross-database portability. Optional features in the SQL standard, like advanced window functions, are implemented inconsistently—for instance, some systems require specific syntax variations—while legacy support preserves older, non-standard behaviors to avoid breaking existing applications.

Specific incompatibilities arise in common operations, including date handling, where systems differ in default formats and conversion rules. SQL Server relies on vendor-specific datetime types with implicit conversions that may lose precision when ported to systems like PostgreSQL, which adheres more closely to ISO 8601 standards for dates. Pagination syntax varies notably, with Microsoft SQL Server using the TOP clause (e.g., SELECT TOP 10 * FROM table) while MySQL and PostgreSQL employ LIMIT (e.g., SELECT * FROM table LIMIT 10), requiring query rewrites for interoperability. Regular expression support also differs: Oracle uses REGEXP_LIKE with POSIX-like patterns, SQL Server 2025 introduced REGEXP functions with its own dialect, and MySQL applies REGEXP with simpler Perl-compatible extensions, leading to pattern mismatches across platforms.

To mitigate these issues, tools like SQLAlchemy provide an abstraction layer that generates dialect-specific SQL from neutral Python code, supporting over 20 databases since its initial release in 2005. Database migration services, such as AWS Database Migration Service and Google Cloud's Database Migration Service, automate schema and query translations during transitions, handling dialect differences through built-in converters.

A representative case study involves porting queries from SQLite, an embedded database engine with non-standard SQL extensions like flexible typing, to IBM DB2, an enterprise system enforcing stricter type rules and lacking several SQLite-specific statements. Developers must rewrite SQLite-specific date functions (e.g., strftime) to DB2's TIMESTAMP_FORMAT and adjust for DB2's absence of LIMIT, using FETCH FIRST instead, often requiring iterative testing to resolve syntax errors and behavioral discrepancies.

Recent trends show increasing convergence through cloud providers, where services like Cloud SQL for MySQL enable an ANSI SQL mode via the sql_mode setting (e.g., setting ANSI_QUOTES), reducing reliance on vendor-specific quirks and promoting standard-compliant queries across environments.
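The pagination differences described above can be seen side by side in the following sketch, which retrieves the ten most expensive rows from a hypothetical products table in three dialects:
sql
-- Microsoft SQL Server: TOP clause
SELECT TOP 10 * FROM products ORDER BY price DESC;

-- MySQL and PostgreSQL: LIMIT clause
SELECT * FROM products ORDER BY price DESC LIMIT 10;

-- ISO SQL:2008 form (DB2, Oracle 12c and later, PostgreSQL): FETCH FIRST
SELECT * FROM products ORDER BY price DESC FETCH FIRST 10 ROWS ONLY;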

Core Syntax and Components

Declarative Query Structure

SQL employs a declarative paradigm, allowing users to specify the desired output—what to retrieve—without prescribing the method of computation, leaving execution optimization to the database management system (DBMS). This non-procedural approach contrasts with imperative languages by focusing on set-oriented operations rather than row-by-row iteration, enabling concise expressions of complex queries. As introduced in the original SEQUEL design, this facilitates interaction by non-specialists through simple block-structured English keywords.

The foundational query in SQL is the SELECT statement, which follows the basic syntax: SELECT [DISTINCT] column_list FROM table_list [WHERE condition] [GROUP BY grouping_expression] [ORDER BY sort_expression]. This structure begins with selecting columns or expressions to include in the result (projection), identifies the source tables (the FROM list), applies filtering conditions (restriction), and optionally groups or sorts the output. For instance, to retrieve names of employees earning more than $50,000 from an employees table, one might write:
SELECT name 
FROM employees 
WHERE salary > 50000 
ORDER BY name;
This query assumes familiarity with relational concepts such as tables (relations) and rows (tuples). SQL's declarative clauses map directly to relational algebra operations: the SELECT clause corresponds to the projection operator (π), which eliminates unwanted columns; the FROM clause implies a Cartesian product (×) across tables, often combined with joins; and the WHERE clause implements the selection operator (σ), restricting rows based on predicates. These mappings, rooted in the relational model, ensure that queries describe logical relations without procedural steps. The non-procedural design yields benefits including enhanced readability, as queries resemble natural language and are easy to maintain, and improved portability, allowing applications to transfer across compatible DBMS implementations with minimal syntax changes. These advantages stem from the standardized, set-based formulation that prioritizes clarity over algorithmic detail.

Data Definition Language (DDL)

Data Definition Language (DDL) encompasses the subset of SQL commands responsible for defining, modifying, and deleting database schemas and structures, enabling the creation of foundational elements like tables and their associated components. These commands are integral to the ANSI/ISO SQL standard (ISO/IEC 9075), with core DDL features originating from SQL-86 and remaining largely consistent through subsequent revisions, including SQL:2023. DDL's emphasis on schema management ensures that database objects adhere to relational principles, such as integrity constraints, without directly manipulating data content. Due to its high level of standardization, DDL statements exhibit strong portability across major RDBMS vendors like Oracle, PostgreSQL, and SQL Server, minimizing syntax variations for basic operations.

The CREATE TABLE statement is the cornerstone of DDL, allowing users to define a new table by specifying column names, data types, and optional constraints to enforce data integrity. For instance, to create a "Customers" table with an integer ID as the primary key, a variable-length string for the name, and an email field with a NOT NULL constraint, the following syntax can be used:
sql
CREATE TABLE Customers (
    ID INT PRIMARY KEY,
    Name VARCHAR(100),
    Email VARCHAR(255) NOT NULL
);
This command establishes the table schema in compliance with SQL-86 standards, where PRIMARY KEY ensures uniqueness and non-nullability for the ID column, while NOT NULL prevents empty values in the Email column. Constraints like UNIQUE (to prevent duplicate values in non-key columns), CHECK (to validate data against a condition, e.g., ensuring age > 0), and FOREIGN KEY (to link tables via referential integrity) are also defined during table creation to maintain relational consistency; a consolidated sketch appears at the end of this section. ALTER TABLE enables modifications to an existing table's structure, such as adding, dropping, or altering columns, or adjusting constraints, without affecting the underlying data. A common operation is adding a new column, as in:
sql
ALTER TABLE Customers ADD Phone VARCHAR(20);
This extends the table schema dynamically, supporting operations like dropping a column (ALTER TABLE Customers DROP COLUMN Phone;) or adding a FOREIGN KEY constraint to reference another table. Such alterations align with ISO/IEC 9075-2 (Foundation) requirements for schema evolution, though complex changes may require temporary tables in some implementations to preserve data. Conversely, the DROP TABLE command removes an entire table and its data irreversibly, as shown by:
sql
DROP TABLE Customers;
This operation, standardized since SQL-86, cascades to dependent objects if specified (e.g., DROP TABLE Customers CASCADE;), ensuring clean schema cleanup. Beyond tables, DDL includes commands for schema elements that enhance structure and performance. The CREATE INDEX statement builds an index on one or more columns to accelerate query retrieval, particularly for frequently searched fields; for example:
sql
CREATE INDEX idx_email ON Customers(Email);
Although not part of the core SQL standard, CREATE INDEX is widely supported across vendors for optional performance optimization, with B-trees as a common underlying structure. Similarly, CREATE VIEW defines a virtual table derived from a query, abstracting complex joins or filters without storing data physically:
sql
CREATE VIEW ActiveCustomers AS
SELECT ID, Name FROM Customers WHERE Status = 'Active';
Views, standardized since SQL-86, promote data abstraction and security by limiting access to subsets of tables, and they are highly portable due to their reliance on standard SELECT syntax. Overall, DDL's standardized syntax facilitates schema portability, with core commands achieving near-universal vendor support, though advanced options may vary slightly by vendor implementation.
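To consolidate the constraint types discussed above, the following sketch defines a hypothetical Orders table referencing the Customers table, combining UNIQUE, CHECK, and FOREIGN KEY constraints at creation time:
sql
CREATE TABLE Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT NOT NULL,
    OrderNumber VARCHAR(20) UNIQUE,                     -- no duplicate order numbers
    Quantity INT CHECK (Quantity > 0),                  -- validate values against a condition
    FOREIGN KEY (CustomerID) REFERENCES Customers(ID)   -- enforce referential integrity
);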

Data Manipulation Language (DML)

Data Manipulation Language (DML) encompasses the SQL statements responsible for modifying data within tables, including adding, altering, and removing records. These operations are essential for maintaining and updating database contents in a declarative manner, where the user specifies what changes to make without detailing how the DBMS executes them. Defined in the core of the SQL standard (ISO/IEC 9075-2: Foundation), DML statements form a foundational component of the language, enabling efficient data handling across compliant systems.

The INSERT statement adds one or more new rows to a specified table, either by providing explicit values or by selecting from another query. Its basic syntax follows the form INSERT INTO table_name (column_list) VALUES (value_list);, allowing insertion of single or multiple rows in a single operation. For instance, to add an employee record, one might use INSERT INTO Employees (id, name, salary) VALUES (101, 'Alice Johnson', 75000);. This statement conforms to the SQL standard, with extensions like RETURNING for retrieving inserted values available in some implementations but not part of the core specification.

The UPDATE statement modifies existing rows in a table by changing the values of specified columns, typically conditioned on a WHERE clause to target specific records. The syntax is UPDATE table_name SET column1 = expression1, column2 = expression2, ... WHERE condition;, which updates only the rows matching the condition and leaves others unchanged. An example is UPDATE Employees SET salary = salary * 1.1 WHERE department = 'IT';, which increases salaries for IT department employees by 10%. UPDATE adheres to the SQL standard, supporting subqueries in the WHERE clause for complex conditional logic, such as referencing data from other tables.

The DELETE statement removes rows from a table based on a specified condition, emptying the table if no WHERE clause is provided. Its syntax is DELETE FROM table_name WHERE condition;, which deletes matching rows and preserves the table structure. For example, DELETE FROM Employees WHERE status = 'inactive'; removes all inactive employee records. This operation aligns with the SQL standard, and like UPDATE, it integrates subqueries in the WHERE clause to enable deletions based on dynamic criteria from other sources.

The MERGE statement, also known as UPSERT, combines INSERT, UPDATE, and optionally DELETE operations into a single atomic statement, conditionally applying changes based on a join between source and target data. Introduced as an optional feature in SQL:2003 (ISO/IEC 9075-2), its syntax involves MERGE INTO target_table USING source_table ON join_condition WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...;. A practical example is synchronizing customer data: MERGE INTO Customers c USING NewCustomers n ON c.id = n.id WHEN MATCHED THEN UPDATE SET c.email = n.email WHEN NOT MATCHED THEN INSERT (id, email) VALUES (n.id, n.email);. MERGE enhances efficiency for bulk synchronization tasks and supports subqueries in the source or conditions for refined matching.

DML statements frequently integrate with SELECT through subqueries, allowing conditional modifications driven by queried data without needing separate transactions for reads and writes. For instance, an UPDATE might use a subquery like UPDATE Employees SET manager_id = (SELECT id FROM Managers WHERE location = 'HQ') WHERE department = 'Sales'; to assign a specific manager based on a dynamic selection. This capability, part of the SQL standard, promotes concise and powerful data manipulation while referencing schema elements defined elsewhere.
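Formatted as a multi-line statement, the customer-synchronization MERGE example above reads as follows (table and column names are illustrative):
sql
MERGE INTO Customers c
USING NewCustomers n
    ON c.id = n.id
WHEN MATCHED THEN
    UPDATE SET c.email = n.email    -- refresh existing customers
WHEN NOT MATCHED THEN
    INSERT (id, email)              -- add customers seen for the first time
    VALUES (n.id, n.email);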

Procedural and Control Features

Procedural Extensions (PL/SQL, T-SQL)

Procedural extensions to SQL introduce imperative programming constructs, enabling developers to write complex, reusable code within the database environment. These extensions augment SQL's declarative nature by incorporating control structures such as loops and conditionals, exception handling, and modular code organization through procedures and functions. Primarily vendor-specific, they facilitate tasks like batch processing, business-logic encapsulation, and data validation directly in the database, reducing the need for external application code. Other databases offer similar extensions, such as MySQL's stored program language, which implements much of the SQL/PSM standard.

PL/SQL, Oracle's procedural language extension, structures code into anonymous or named blocks that promote modularity and error management. A typical PL/SQL block consists of a DECLARE section for variable and exception declarations, a BEGIN section for executable statements, an optional EXCEPTION section for handling errors, and an END keyword to close the block. This structure supports conditional logic via IF-THEN-ELSE and CASE statements, as well as iterative control through LOOP, WHILE, and FOR constructs, allowing repetition based on conditions or predefined ranges. For instance, a simple FOR loop might iterate over a range to process records:
plsql
DECLARE
  counter NUMBER := 0;
BEGIN
  FOR i IN 1..5 LOOP
    counter := counter + i;
  END LOOP;
END;
Exception handling in PL/SQL uses predefined exceptions like NO_DATA_FOUND or user-defined ones declared in the DECLARE section, enabling graceful error recovery. T-SQL, Microsoft's extension for SQL Server, mirrors PL/SQL's procedural capabilities while introducing enhancements for robustness. It employs a similar block structure but emphasizes batch execution and integration with .NET via common language runtime (CLR). T-SQL supports conditionals through IF-ELSE and CASE expressions, and loops via WHILE statements, facilitating repetitive tasks like data transformation. A distinctive feature is the TRY-CATCH construct for error handling, which captures errors with severity greater than 10 that do not terminate the connection, allowing custom responses such as logging or rollback. An example TRY-CATCH block might look like:
sql
BEGIN TRY
  -- Executable statements, e.g., a risky division
  SELECT 1 / 0;
END TRY
BEGIN CATCH
  SELECT ERROR_MESSAGE() AS ErrorDetails;
END CATCH;
This mechanism improves code reliability by isolating error-prone operations. Stored procedures in these extensions encapsulate reusable SQL and procedural logic, defined using the CREATE PROCEDURE statement and invoked with EXECUTE or CALL. They accept input parameters, perform operations like data manipulation, and can return output via parameters or result sets, promoting reuse by compiling once and executing multiple times. In Oracle PL/SQL and SQL Server T-SQL, procedures support transaction control and can be schema-qualified for namespacing. For example, a basic procedure to update employee salaries might be created as:
sql
CREATE PROCEDURE UpdateSalary @EmpID INT, @NewSalary DECIMAL(10,2)
AS
BEGIN
  UPDATE Employees SET Salary = @NewSalary WHERE EmployeeID = @EmpID;
END;
Execution would then use: EXEC UpdateSalary 123, 75000;. This modularity enhances maintainability and security by centralizing logic. Functions in procedural extensions return computed values, categorized as scalar or table-valued. Scalar functions return a single value, such as a string or number, and are often used in SELECT clauses for calculations like formatting dates. Table-valued functions, in contrast, return a result set resembling a table, enabling their use in JOINs or as data sources for further querying; they can be inline (single SELECT) or multi-statement for complex logic. In T-SQL, for instance:
sql
CREATE FUNCTION GetEmployeeDetails (@DeptID INT)
RETURNS TABLE
AS
RETURN (SELECT * FROM Employees WHERE DepartmentID = @DeptID);
This allows queries like SELECT * FROM GetEmployeeDetails(5);. Scalar functions, however, like one computing age from a birthdate, return one value per invocation: CREATE FUNCTION dbo.GetAge (@BirthDate DATE) RETURNS INT AS BEGIN RETURN DATEDIFF(YEAR, @BirthDate, GETDATE()); END;. These distinctions optimize performance, with table-valued functions generally preferred for set-based operations. Efforts to standardize procedural extensions culminated in SQL/PSM (Persistent Stored Modules), introduced in the SQL:1999 standard (ISO/IEC 9075-4:1999), which defines a portable syntax for stored procedures and functions using DECLARE for local variables, BEGIN...END for blocks, and control structures like IF and LOOP. SQL/PSM aims to enable cross-database compatibility by specifying parameter modes (IN, OUT, INOUT), cursors, and handlers, though adoption varies, with vendors like IBM DB2 and MySQL implementing subsets. This standard extends core SQL to support modular, procedural programming without proprietary dialects.
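As a rough illustration of SQL/PSM-style syntax, the following sketch assumes the Employees table used earlier; dialects derived from SQL/PSM (e.g., MySQL's stored programs, DB2 SQL PL) differ in details such as statement delimiters:
sql
CREATE PROCEDURE raise_salary (IN emp_id INT, IN pct DECIMAL(5,2))
BEGIN
  DECLARE current_sal DECIMAL(10,2);
  -- Read the current salary into a local variable
  SELECT Salary INTO current_sal FROM Employees WHERE EmployeeID = emp_id;
  IF current_sal IS NOT NULL THEN
    UPDATE Employees
    SET Salary = current_sal * (1 + pct / 100)
    WHERE EmployeeID = emp_id;
  END IF;
END;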

Data Control Language (DCL) and Security

Data Control Language (DCL) encompasses SQL statements designed to manage access privileges and security within a database system, ensuring that users and roles can perform only authorized operations on database objects such as tables, views, and schemas. Introduced as part of the ANSI/ISO SQL standards, DCL commands like GRANT and REVOKE form the core mechanism for implementing fine-grained access control, preventing unauthorized data manipulation or exposure. These features are essential for maintaining confidentiality and integrity in multi-user environments, where different principals require varying levels of permission.

The GRANT statement assigns specific privileges to users, roles, or the PUBLIC group, allowing operations such as SELECT, INSERT, UPDATE, DELETE, or EXECUTE on database objects. For instance, the syntax GRANT SELECT ON employees TO user1; permits user1 to read data from the employees table without altering it. Privileges can be granted with the WITH GRANT OPTION clause, enabling the recipient to further delegate those permissions to others, which supports hierarchical access management in large systems. This command is compliant with ANSI/ISO SQL:2011, ensuring portability across conforming database management systems (DBMS).

Conversely, the REVOKE statement withdraws previously granted privileges, immediately restricting access to specified objects. An example is REVOKE INSERT ON employees FROM user1;, which removes user1's ability to add records to the table. If privileges were granted with WITH GRANT OPTION, revoking them with CASCADE propagates the removal to any dependent grantees; otherwise, RESTRICT prevents revocation if dependencies exist. Like GRANT, REVOKE adheres to ANSI/ISO SQL:2011 standards, providing a standardized way to dynamically adjust permissions without altering underlying data structures.

Roles in SQL serve as named groups of privileges, simplifying administration by allowing permissions to be bundled and assigned collectively to users. The CREATE ROLE statement defines a new role, such as CREATE ROLE analyst;, after which privileges can be granted to the role using GRANT. Users are then assigned roles via GRANT analyst TO user1;, activating the associated permissions upon login or via SET ROLE. This mechanism, part of SQL:1999 (ISO/IEC 9075), reduces redundancy in privilege management and supports scalable security models in enterprise databases. Roles can also be nested, where one role inherits privileges from another, enhancing flexibility for complex organizational hierarchies.

Database systems provide built-in security mechanisms beyond DCL, including authentication to verify user identity before privilege evaluation. In Microsoft SQL Server, Windows Authentication integrates with operating system credentials for seamless, secure logins without storing passwords in the database. This mode leverages Kerberos or NTLM protocols to authenticate users, reducing exposure to credential compromise compared to SQL Server Authentication, which uses database-stored usernames and passwords.

Row-level security (RLS) extends access control by restricting operations to specific rows based on user context, often implemented through policies rather than coarse object-level grants. In PostgreSQL, RLS is enabled on a table with ALTER TABLE employees ENABLE ROW LEVEL SECURITY;, followed by policy definitions like CREATE POLICY analyst_policy ON employees FOR SELECT USING (department = current_user);, which limits visibility to rows matching the user's department.
This feature, introduced in PostgreSQL 9.5, enforces access control at the query execution layer, complementing DCL by preventing data leaks even if broader privileges are granted. Policies can apply to SELECT, INSERT, UPDATE, or DELETE, with permissive or restrictive modes to combine multiple rules. In implementations like SQL Server, auditing capabilities introduced in SQL Server 2008 allow tracking of database access and modifications to detect unauthorized activities or support compliance. Audit specifications define events to monitor, such as logins or data changes, with actions grouped for efficiency; database-level audit specifications target object-specific events, writing logs to files, the Windows Event Log, or security logs for review. This feature ensures comprehensive visibility into security-relevant operations without significantly impacting query performance.
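The core DCL statements discussed above combine as in this sketch (role, user, and table names are illustrative):
sql
CREATE ROLE analyst;
GRANT SELECT ON employees TO analyst;                   -- read-only privilege for the role
GRANT analyst TO user1;                                 -- user1 inherits the role's privileges
GRANT SELECT ON employees TO user2 WITH GRANT OPTION;   -- user2 may delegate this privilege
REVOKE SELECT ON employees FROM user2 CASCADE;          -- also removes user2's dependent grants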

Transaction Management

Transaction management in SQL provides mechanisms to group multiple operations into atomic units, ensuring data integrity and reliability in multi-user environments. These mechanisms allow database systems to handle concurrent access while maintaining consistency, particularly during failures or errors. SQL's transaction model supports the ACID properties, which guarantee that transactions are processed reliably.

The ACID properties—Atomicity, Consistency, Isolation, and Durability—form the foundation of SQL transaction semantics. Atomicity ensures that a transaction is treated as a single, indivisible unit: either all operations succeed, or none are applied, preventing partial updates. Consistency requires that a transaction brings the database from one valid state to another, enforcing constraints such as primary keys and foreign keys. Isolation prevents concurrent transactions from interfering with each other, allowing them to operate as if executed sequentially despite parallelism. Durability guarantees that once a transaction commits, its changes persist even in the event of system failures, typically achieved through write-ahead logging and recovery protocols.

SQL defines standard commands to control transactions explicitly. The BEGIN TRANSACTION (or START TRANSACTION) statement initiates a new transaction, grouping subsequent DML statements until termination. The COMMIT command finalizes the transaction, making all changes permanent and visible to other users. Conversely, ROLLBACK undoes all changes since the transaction began, reverting the database to its pre-transaction state. These commands are part of the core SQL standard and are implemented across major DBMSs to enforce atomicity and consistency.

To manage concurrency, SQL specifies four isolation levels in the SQL:1992 standard, balancing performance and anomaly prevention. READ UNCOMMITTED permits dirty reads, where a transaction can view uncommitted changes from others, offering the lowest isolation but highest concurrency. READ COMMITTED prevents dirty reads by ensuring reads only see committed data, though it allows non-repeatable reads and phantoms. REPEATABLE READ avoids dirty and non-repeatable reads by locking read rows, but phantoms may still occur. SERIALIZABLE provides the strictest isolation, equivalent to serial execution, preventing all anomalies through full locking or equivalent mechanisms. Isolation levels are set via SET TRANSACTION ISOLATION LEVEL and implement the isolation property of ACID.

For finer control within long transactions, SQL supports savepoints using the SAVEPOINT command, introduced as a standard feature to enable partial rollbacks. A savepoint marks a point in the transaction; ROLLBACK TO SAVEPOINT undoes changes made after that point, preserving earlier work, while RELEASE SAVEPOINT removes the marker. This allows nested recovery without full transaction abortion, enhancing flexibility in complex operations.

Deadlock handling in SQL is primarily DBMS-specific, as the standard does not mandate a uniform approach. Most systems detect deadlocks using wait-for graphs or periodic lock monitoring; upon detection, one transaction is chosen as the victim, rolled back, and an error is raised to the application. For example, SQL Server's deadlock monitor runs every 5 seconds by default, resolving cycles by terminating the transaction with the least rollback cost, estimated via a formula considering log space and undo work. MySQL's InnoDB employs similar graph-based detection during lock waits, prioritizing victim selection based on session age or resource usage to minimize impact. Applications must handle deadlock errors by retrying transactions, often with exponential backoff.
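The transaction commands above fit together as in this sketch of a funds transfer against a hypothetical accounts table; the savepoint allows the second update to be undone without discarding the first:
sql
BEGIN TRANSACTION;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;   -- debit

SAVEPOINT after_debit;

UPDATE accounts SET balance = balance + 100 WHERE id = 2;   -- credit

-- If the credit step fails a business check, undo only the work
-- performed after the savepoint:
ROLLBACK TO SAVEPOINT after_debit;

COMMIT;   -- make the remaining changes permanent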

Data Types and Storage

Predefined Data Types

SQL's predefined data types, as specified in the ISO/IEC 9075 standard, provide the fundamental building blocks for defining columns, variables, and literals in relational databases, ensuring portability across compliant implementations. These types are categorized into numeric, character string, binary string, datetime, interval, and boolean groups, with each designed to handle specific kinds of values while enforcing constraints on range, precision, and permitted operations. The standard mandates support for these types to promote interoperability, though exact storage sizes and additional behaviors may vary by implementation.

Numeric Types

Numeric types in SQL store integer, fixed-point, or floating-point values, divided into exact and approximate categories to preserve precision or allow for efficient representation of real numbers. Exact numeric types include INTEGER, which represents whole numbers with no fractional part and is typically implemented as a 32-bit signed integer ranging from -2,147,483,648 to 2,147,483,647, though the standard focuses on semantic precision rather than bit width. SMALLINT and BIGINT extend this for smaller and larger ranges, respectively, while DECIMAL(p,s) and NUMERIC(p,s) allow user-specified precision (p, total digits) and scale (s, digits after the decimal point) for exact decimal values, such as DECIMAL(10,2) for values up to 99999999.99. Approximate numeric types like FLOAT(p) and REAL use floating-point representation for high-speed calculations, where p denotes the precision in bits, but they may introduce rounding errors unsuitable for financial applications.

Character String Types

Character string types manage textual data, supporting fixed-length, variable-length, and large-object storage to accommodate everything from short identifiers to extensive documents. CHARACTER(n) or CHAR(n) allocates a fixed n characters of storage, padding with spaces if needed, ideal for codes like country abbreviations where length is constant. CHARACTER VARYING(n) or VARCHAR(n) stores up to n characters without padding, conserving space for variable-length strings such as names or addresses, with n often limited to 65,535 in practice. For oversized text, CLOB (Character Large Object) handles character data exceeding typical string limits, up to gigabytes, enabling storage of articles or logs without length constraints. National character variants like NCHAR and NVARCHAR support Unicode for international text.

Date and Time Types

Datetime types capture temporal information, aligning with the Gregorian calendar and ISO 8601 for consistent chronology across systems. DATE stores a year, month, and day (e.g., '2025-11-08'), spanning from 0001-01-01 to 9999-12-31. TIME(p) records hours, minutes, and seconds with optional fractional precision p (up to 6 digits for microseconds), such as '14:30:00.123456'. TIMESTAMP(p) combines DATE and TIME for full instants, like '2025-11-08 14:30:00.123456', and may include time zone offsets in extended forms. These types, introduced in earlier standards and refined in SQL:1999, support arithmetic and comparisons for querying temporal data. INTERVAL qualifiers represent durations between points in time, such as YEAR TO MONTH or DAY TO SECOND, for expressions like INTERVAL '1' DAY.

Binary String Types

Binary string types store sequences of bytes for non-textual data like images or encrypted content, distinct from character types by lacking encoding assumptions. BINARY(n) reserves fixed n bytes, similar to CHAR but without character interpretation. VARBINARY(n) holds variable-length binary data up to n bytes, efficient for hashes or keys. BLOB (Binary Large Object) accommodates massive binary payloads, such as media files, without size limits in the standard, though implementations cap sizes at terabytes. These types preserve exact byte sequences and support operations like concatenation.

Boolean Type

The BOOLEAN type, standardized in SQL:1999, represents logical values with literals TRUE and FALSE, plus the unknown truth value (represented by null), enabling conditional expressions in queries and procedural code. It occupies minimal storage, often a single bit or byte, and integrates with operators like AND, OR, and NOT for truth evaluation. Unlike numeric approximations, BOOLEAN enforces strict logical semantics, crucial for decision-making in database constraints. While the ISO standard defines these core types, vendors extend them for specialized needs; for instance, MySQL introduces TINYINT as a 1-byte integer for compact storage of small values from -128 to 127 (signed) or 0 to 255 (unsigned). Similarly, PostgreSQL provides UUID, a 128-bit type for universally unique identifiers formatted as 8-4-4-4-12 hexadecimal digits, enhancing distributed system uniqueness. These extensions are defined in DDL statements like CREATE TABLE.
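The following sketch gathers several of the predefined types above into one table definition (column names are illustrative; exact type availability varies by vendor):
sql
CREATE TABLE measurements (
    id          BIGINT PRIMARY KEY,
    sensor_code CHAR(4),              -- fixed-length string, space-padded
    label       VARCHAR(100),         -- variable-length string
    reading     DECIMAL(10,2),        -- exact numeric: 10 digits, 2 after the point
    ratio       REAL,                 -- approximate numeric
    taken_at    TIMESTAMP(6),         -- datetime with microsecond precision
    payload     VARBINARY(256),       -- variable-length binary
    is_valid    BOOLEAN               -- TRUE, FALSE, or null (unknown)
);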

User-Defined and Complex Types

SQL supports user-defined types (UDTs) to extend the predefined data types, allowing users to create custom types with specific behaviors and constraints tailored to application needs. These mechanisms, introduced in SQL:1999, enable the definition of domains as aliases for existing types with added constraints, and distinct types that enforce strong typing by preventing implicit conversions from their base types. The CREATE TYPE statement is used to define these, either standalone or within a schema definition. For example, a domain for email addresses might be created as CREATE DOMAIN email AS VARCHAR(255) CHECK (VALUE LIKE '%@%.%');, ensuring validation at the type level. Distinct types, also defined via CREATE TYPE, are based on built-in types but treated as unique for operations, promoting strong typing; for instance, CREATE TYPE money AS DECIMAL(10,2); distinguishes monetary values from general decimals, requiring explicit casting for assignments. This feature enhances type safety by avoiding unintended arithmetic or comparisons between incompatible values. Unlike domains, distinct types support method definitions and can be used in table columns, parameters, or routines.

Structured types in SQL:1999 allow composite data representation through row types, which aggregate multiple fields into a single value. A row type can be named using CREATE TYPE address AS ROW (street VARCHAR(100), city VARCHAR(50), zip INTEGER);, enabling structured storage for entities like addresses in columns or variables. These types support constructor functions for instantiation, such as address('123 Main St', 'Anytown', 12345), and can be nested or used in table definitions for object-relational mapping. Row types facilitate hierarchical data modeling without flattening into multiple columns.

Collection types, including arrays and multisets, were added in SQL:1999 and SQL:2003, respectively, to handle variable-sized groupings of values. Arrays are ordered and fixed-size collections, declared as column_name INTEGER ARRAY[5], and support indexing like array_column[1] or construction via ARRAY[1, 2, 3]. Multisets, unordered and allowing duplicates, extend this for bag semantics, useful in aggregate queries or data import scenarios. Both can be used in table columns and manipulated with functions like CARDINALITY for size or set operations.

SQL:2003 introduced built-in XML support via the XML data type, storing well-formed XML documents with validation against schemas and methods for querying or serialization, such as XMLSERIALIZE or XMLQUERY. This integrates semi-structured data into relational schemas, allowing XML columns in tables and predicates like XMLEXISTS for XPath-based filtering. The type ensures type safety for XML operations, bridging relational and document-oriented paradigms.

The SQL:2023 standard added JSON as a distinct data type, supporting storage, validation, and manipulation of JSON documents. Functions like JSON_VALUE extract scalars and JSON_QUERY retrieves objects or arrays, while vendor extensions such as JSON_MODIFY (T-SQL) update content, enabling hybrid relational-JSON workflows. For example, a column data JSON can hold {"name": "Alice", "age": 30}, queried via JSON_VALUE(data, '$.name'). This facilitates NoSQL-like flexibility within SQL environments.

In implementations like PostgreSQL, object-oriented extensions allow custom composite types via CREATE TYPE point AS (x FLOAT, y FLOAT);, which can include methods and operators for domain-specific behavior, such as geometric calculations.
These types support inheritance and can be used in arrays or as table row types, enhancing extensibility beyond the standard.
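A short sketch in PostgreSQL syntax ties together the domain and composite-type mechanisms described above (names are illustrative):
sql
-- Domain: a base type plus a reusable constraint
CREATE DOMAIN email AS VARCHAR(255)
    CHECK (VALUE LIKE '%@%.%');

-- Composite (row) type
CREATE TYPE address AS (
    street VARCHAR(100),
    city   VARCHAR(50),
    zip    INTEGER
);

-- Both used in a table definition
CREATE TABLE contacts (
    id   INTEGER PRIMARY KEY,
    mail email,
    home address
);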

Handling Nulls and Duplicates

In SQL, null represents a special marker for unknown or missing data values, distinct from any actual value including zero or empty strings. This leads to a three-valued logic system in predicates and expressions, where results can be true, false, or unknown (often treated equivalently to false in WHERE clauses but distinct in other contexts). Null propagation occurs in comparisons, where any comparison involving a null (e.g., column = NULL) evaluates to unknown rather than true or false, preventing unintended matches.

In aggregate functions, nulls are generally ignored: for instance, COUNT(expression) excludes nulls while counting non-null values, SUM skips nulls rather than treating them as zero, and AVG computes over non-null inputs only. This behavior ensures aggregates reflect meaningful data without distortion from absences, though an empty input set (no rows) typically yields null for most aggregates except COUNT(*), which returns zero.

To explicitly test for nulls, SQL provides the IS NULL and IS NOT NULL predicates, which return true or false without producing unknown—unlike equality checks. For substituting defaults, the standard COALESCE function returns the first non-null argument from a list, or null if all are null; for example, COALESCE(column1, column2, 'default') handles missing values in queries.

Duplicates in query results arise from repeated row values and are managed using the DISTINCT keyword in SELECT statements, which eliminates identical rows from the output set. The GROUP BY clause further handles duplicates by partitioning rows into groups based on specified columns, often combined with aggregates to produce one row per unique combination. At the schema level, a UNIQUE constraint in data definition language (DDL) enforces no duplicate values in specified columns during inserts or updates, rejecting violations to maintain data integrity, though it allows multiple nulls since nulls are not considered equal. The following example illustrates null handling:
sql
SELECT COALESCE(salary, 0) AS adjusted_salary
FROM employees
WHERE department IS NOT NULL;
This query substitutes 0 for null salaries and filters non-null departments. For duplicates:
sql
SELECT DISTINCT department
FROM employees;
This returns unique department names, removing repeats.

Advanced Querying Techniques

Joins and Subqueries

In SQL, joins are operations that combine rows from two or more tables based on related columns, enabling the retrieval of data across relational structures. The SQL-92 standard introduced explicit join syntax using the JOIN keyword and an ON clause to specify join conditions, replacing older implicit joins in the WHERE clause for better clarity and handling of outer joins. This syntax allows for precise control over which rows are included in the result set.

The primary join types defined in the SQL-92 standard are INNER JOIN, OUTER JOIN variants (LEFT, RIGHT, and FULL), and CROSS JOIN. An INNER JOIN returns only rows where there is a match in both tables based on the ON condition, such as SELECT e.name, d.department_name FROM Employees e INNER JOIN Departments d ON e.dept_id = d.id, which retrieves employee names and their department names only for matching department IDs. LEFT OUTER JOIN includes all rows from the left table and matching rows from the right table, filling non-matches with NULLs in the right table's columns; for example, the above query with LEFT JOIN would show all employees, even those without assigned departments. RIGHT OUTER JOIN mirrors this but prioritizes the right table, while FULL OUTER JOIN includes all rows from both tables with NULLs for non-matches, though support for FULL OUTER JOIN varies across implementations as it is not universally required by the standard. CROSS JOIN produces a Cartesian product, pairing every row from the first table with every row from the second without a condition, resulting in m * n rows for tables of sizes m and n; it is useful for generating combinations but can produce large result sets.

Subqueries, also known as nested queries, allow embedding one SELECT statement within another to perform complex filtering or computation. They are classified by their return value: scalar subqueries return a single value, row subqueries return a single row with multiple columns, and table subqueries return multiple rows and columns. Subqueries can appear in the SELECT list for computed columns, such as SELECT name, (SELECT AVG(salary) FROM Employees) AS avg_salary FROM Employees, yielding the average salary alongside each employee's name; in the WHERE clause for filtering, like SELECT * FROM Employees WHERE salary > (SELECT AVG(salary) FROM Employees), which finds employees above average salary; or in the FROM clause as a derived table, e.g., SELECT * FROM (SELECT * FROM Employees WHERE dept_id = 1) AS dept1_employees.

Correlated subqueries reference columns from the outer query, making them execute once per outer row and dependent on its context. For instance, SELECT name FROM Employees e WHERE salary > (SELECT AVG(salary) FROM Employees e2 WHERE e2.dept_id = e.dept_id) identifies employees with salaries above their department's average, as the inner query correlates on dept_id. This differs from non-correlated subqueries, which run independently once.

For performance, indexes on join columns accelerate matching by reducing scan times, and the query optimizer selects algorithms like nested loops for indexed small-to-large joins, merge joins for sorted data, or hash joins for large unsorted sets. Correlated subqueries may incur overhead from repeated execution, but optimizers often rewrite them as joins for efficiency.
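The following sketch combines a LEFT OUTER JOIN with a correlated subquery, reusing the Employees and Departments tables from the examples above:
sql
-- All employees with their department names (NULL when unassigned),
-- restricted to those earning above their department's average salary.
SELECT e.name, d.department_name
FROM Employees e
LEFT JOIN Departments d ON e.dept_id = d.id
WHERE e.salary > (SELECT AVG(e2.salary)
                  FROM Employees e2
                  WHERE e2.dept_id = e.dept_id);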

Window Functions and Aggregations

Window functions, also known as analytic functions, enable computations across a set of rows related to the current row without collapsing the result set into groups, allowing for advanced analytical queries such as ranking, running totals, and moving averages. These functions were formally introduced in the SQL:2003 standard (ISO/IEC 9075-2:2003), with subsequent enhancements in SQL:2008 and later revisions to support more flexible window framing and grouping options. Unlike traditional aggregate functions that require a GROUP BY clause to summarize data, window functions preserve all rows in the output while adding computed values, making them essential for analytical workloads in relational databases.

The core syntax for window functions involves an OVER clause that defines the window for the computation, typically structured as OVER (PARTITION BY column ORDER BY column [frame_specification]). The PARTITION BY clause divides the result set into partitions (subsets of rows) based on one or more columns, similar to GROUP BY but without collapsing rows; if omitted, the entire result set forms a single partition. The ORDER BY clause within OVER sorts rows within each partition, which is required for ranking functions and influences the frame for ordered aggregates; it supports ascending or descending order, with options for handling nulls. The optional frame specification, introduced and refined in SQL:2003 and SQL:2008, delimits the rows considered in the window using ROWS, RANGE, or GROUPS modes to specify boundaries like UNBOUNDED PRECEDING, CURRENT ROW, or value-based offsets.

Ranking functions such as ROW_NUMBER() and RANK() assign sequential numbers or ranks to rows within a partition, ordered by specified criteria. For example, to rank sales records by amount within each region, a query might use SELECT region, salesperson, sales_amount, ROW_NUMBER() OVER (PARTITION BY region ORDER BY sales_amount DESC) AS rank FROM sales_table;, which assigns unique numbers starting from 1 to each row per region, preserving all original rows. RANK() behaves similarly but assigns the same rank to tied values and skips subsequent numbers (e.g., 1, 2, 2, 4), useful for identifying top performers while reflecting ties.

Aggregate window functions extend standard aggregates like SUM(), AVG(), and COUNT() over a window to compute values such as running totals or moving averages while retaining row-level detail. For running totals, SUM(sales_amount) OVER (PARTITION BY region ORDER BY sale_date ROWS UNBOUNDED PRECEDING) calculates the cumulative sum from the partition start to the current row for each sale. Moving averages can be computed using frame specifications, such as AVG(sales_amount) OVER (ORDER BY sale_date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) to average the current and two prior values, enabling trend analysis without subqueries.

Frame specifications provide precise control over the window boundaries, with ROWS defining physical offsets (e.g., number of rows) and RANGE using logical offsets based on the ORDER BY values (e.g., values within a time range). Common frames include ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW for cumulative computations from the partition's start, or ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING for centered moving windows; the default frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which includes all rows from the start up to peers of the current row. These options, expanded in SQL:2008, allow for efficient handling of time-series data, such as cumulative sums in financial reporting.
A key distinction from GROUP BY is that window functions operate on the full result set post-filtering and joining, applying computations per row without reducing the output cardinality, whereas GROUP BY aggregates rows into summary groups, eliminating individual details. This row-preserving behavior enables complex analytics in a single query, avoiding the need for multiple self-joins or subqueries that GROUP BY alone would require for similar results.
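The contrast is easiest to see on the same hypothetical sales_table: the first query below collapses each region to one summary row, while the second keeps every sale and attaches the regional aggregate to each row.

  -- GROUP BY: one output row per region; individual sales disappear.
  SELECT region, SUM(sales_amount) AS region_total
  FROM sales_table
  GROUP BY region;

  -- Window function: every sale retained, regional total repeated per row.
  SELECT region, salesperson, sales_amount,
         SUM(sales_amount) OVER (PARTITION BY region) AS region_total
  FROM sales_table;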

Common Table Expressions (CTEs)

Common Table Expressions (CTEs) are temporary named result sets defined within the scope of a single SQL statement, allowing for more modular and readable query construction by breaking complex logic into reusable components. Introduced as part of the SQL:1999 standard, CTEs use the WITH clause to define one or more subqueries that can be referenced in the main query, improving maintainability without creating permanent objects in the database. The basic syntax is WITH cte_name [(column_list)] AS (subquery) statement, where the statement is typically a SELECT, INSERT, UPDATE, or DELETE that references the CTE.

Non-recursive CTEs enhance query readability by substituting for subqueries or views in multi-step computations, such as generating reports that involve intermediate aggregations. For instance, a CTE can first compute total sales by region, and the outer query can then use that result to identify top-performing areas, avoiding nested subqueries and making the logic easier to follow. This approach is particularly useful in analytical queries where breaking down steps clarifies intent without impacting the overall execution plan in most database management systems (DBMS).

Recursive CTEs, also standardized in SQL:1999, extend this capability to handle hierarchical or tree-structured data by allowing the CTE to reference itself through a UNION ALL construct, enabling iterative expansion until a termination condition is met. The structure typically includes an anchor member (initial query) followed by a recursive member that joins back to the CTE, as in WITH RECURSIVE cte_name (columns) AS (anchor_query UNION ALL recursive_query) SELECT ... FROM cte_name. A common application is traversing an employee reporting hierarchy: the anchor selects top-level managers, and the recursive part appends subordinates by joining on the manager ID column, producing a complete organizational chart, as sketched below. Another representative example is a bill of materials (BOM) explosion, where the anchor identifies root assemblies and the recursive member unfolds component subassemblies level by level, revealing the full component hierarchy. This recursion also supports basic graph-traversal patterns, such as path finding in networks, by accumulating paths or levels in additional columns during recursion.

Despite their benefits, CTEs have limitations that vary by DBMS implementation. In some systems, such as PostgreSQL prior to version 12, non-recursive CTEs are materialized—computed once and stored temporarily—which can prevent optimizations such as predicate pushdown and degrade performance for large datasets. Recursive CTEs may also face depth limits (e.g., a default of 100 recursion levels in SQL Server) to prevent infinite loops, and their optimization relies on the query planner, which might not inline them as efficiently as equivalent subqueries in all cases. Additionally, CTEs cannot have indexes or constraints applied directly, limiting their use in scenarios requiring temporary table-like persistence.
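A minimal sketch of the reporting-hierarchy case, assuming a hypothetical Employees table with id, name, and manager_id columns (NULL marks top-level managers); the RECURSIVE keyword is required by the standard and by PostgreSQL, while SQL Server omits it.

  -- Hypothetical table: Employees(id, name, manager_id).
  WITH RECURSIVE org_chart (id, name, manager_id, depth) AS (
      -- Anchor member: top-level managers have no manager.
      SELECT id, name, manager_id, 1
      FROM Employees
      WHERE manager_id IS NULL
      UNION ALL
      -- Recursive member: attach each employee to the row for their manager.
      SELECT e.id, e.name, e.manager_id, o.depth + 1
      FROM Employees e
      JOIN org_chart o ON e.manager_id = o.id
  )
  SELECT id, name, depth
  FROM org_chart
  ORDER BY depth, name;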

Distributed and Extended SQL

Distributed Query Processing

Distributed query processing in SQL enables the execution of queries across multiple nodes or databases, facilitating horizontal scalability in large-scale relational systems by distributing data and computation. This approach addresses the limitations of single-node databases by partitioning workloads, allowing for parallel execution and improved throughput. In distributed environments, query optimizers generate execution plans that involve data movement, local processing on individual nodes, and result aggregation at a coordinator node. Systems like Spark SQL and distributed RDBMSs such as YugabyteDB or Oracle Sharding implement these mechanisms to handle high-volume queries efficiently.

Sharding, or horizontal partitioning, is a core technique in which large tables are divided into smaller, self-contained subsets called shards, each stored on a separate server to balance load and enhance scalability. Data is typically partitioned using a shard key—such as a hash of a column value or a range—to ensure even distribution across nodes; for instance, hash sharding in YugabyteDB maps each key into a 16-bit hash space ranging from 0x0000 to 0xFFFF, which is then divided among tablets. Queries against sharded tables involve shard pruning, where the optimizer identifies the relevant shards to minimize the data scanned, followed by parallel execution on each shard and aggregation of the results. This method improves performance by reducing contention and allowing near-linear scaling as nodes are added, though it requires careful key selection to avoid hotspots. Oracle Sharding, for example, processes multi-shard queries by rewriting them into independent subqueries executed on each shard, maintaining consistency via a global system change number (SCN).

Distributed joins in systems like Spark SQL extend traditional join operations across clusters by employing strategies that account for data locality and network costs. The sort-merge join, Spark's default for equi-joins on large datasets, shuffles both relations by their join keys, sorts the partitions on each node, and merges matching records in a distributed manner to produce the final result. Where one relation is small, the broadcast hash join replicates the smaller relation to all nodes and builds local hash tables for efficient probing against the larger, unshuffled relation, thus avoiding costly shuffles. Shuffled hash joins, an adaptive alternative, partition data by key and build in-memory hash tables after the shuffle, with the planner falling back to sort-merge when partitions exceed size thresholds (configured via spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold). These strategies optimize for cluster resources, with broadcast joins triggered automatically for tables under 10 MB by default.

SQL standards and extensions support distributed querying through features like federated tables, which allow transparent access to remote data sources. In MySQL, the FEDERATED storage engine enables creation of local tables that proxy remote MySQL tables, executing queries by forwarding them over a connection without storing data locally; the CREATE TABLE statement includes a CONNECTION string specifying the remote host, user, and database (a sketch appears at the end of this subsection). This facilitates distributed queries across remote instances, though it requires enabling the engine via startup options such as --federated and incurs overhead on each operation. Extensions to the SQL standard, such as the X/Open XA interface, incorporate two-phase commit (2PC) for coordinating distributed transactions, ensuring atomicity by dividing the process into a prepare phase—where participants vote on commit readiness—and a commit phase—where the coordinator broadcasts the decision—with all nodes logging outcomes to handle failures.
2PC, originally formalized in early distributed systems such as NonStop SQL, guarantees that transactions either commit fully or abort entirely across nodes, though it introduces coordinator bottlenecks. Challenges in distributed SQL query processing stem primarily from network latency and fault-tolerance requirements. Data movement across nodes, such as during shuffles or federated queries, amplifies latency, potentially degrading performance for latency-sensitive applications; mitigation involves optimizing join strategies to minimize transfers, such as preferring broadcast over shuffle joins when feasible. Fault tolerance is achieved through replication, in which data copies are maintained across nodes (in synchronous or asynchronous modes) to preserve availability during failures, but this trades consistency guarantees against added overhead—synchronous replication avoids stale reads at the cost of write latency and throughput. Systems balance these concerns via configurable consistency levels, such as Oracle's MULTISHARD_QUERY_DATA_CONSISTENCY parameter, which allows trading freshness for speed in read-heavy workloads.
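A concrete sketch of the FEDERATED mechanism described above; the host name, credentials, and table definition are hypothetical placeholders, and the local column definitions must mirror the remote table.

  -- Local proxy table; rows live on the remote server, not locally.
  -- Requires the FEDERATED engine to be enabled (e.g., starting mysqld with --federated).
  CREATE TABLE remote_orders (
      id    INT NOT NULL,
      total DECIMAL(10,2),
      PRIMARY KEY (id)
  )
  ENGINE=FEDERATED
  CONNECTION='mysql://app_user:secret@db2.example.com:3306/sales/orders';

  -- The query is forwarded over the connection to the remote instance.
  SELECT COUNT(*) FROM remote_orders WHERE total > 100;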

SQL in NoSQL and Hybrid Systems

In non-relational and hybrid database environments, SQL adaptations enable querying diverse data stores while leveraging familiar relational paradigms, bridging the gap between structured querying and scalable, schema-flexible architectures. These adaptations often involve query languages or engines that translate SQL-like constructs into underlying storage or distributed processing operations, facilitating analytics on large-scale, heterogeneous data without full relational enforcement.

SQL-on-Hadoop systems exemplify early efforts to apply SQL to non-relational frameworks. Apache Hive, introduced in 2008 by Facebook and later donated to the Apache Software Foundation, provides HiveQL, a SQL-like language that compiles queries into MapReduce jobs for processing structured data stored in Hadoop's HDFS. This allows users to perform data-warehousing tasks, such as ETL and ad hoc querying, on petabyte-scale datasets without rewriting applications in lower-level paradigms like Java MapReduce. Similarly, Presto, open-sourced in 2013 by Facebook, serves as a distributed query engine optimized for interactive analytics across federated data sources, including Hadoop, supporting low-latency queries over diverse storage formats without materializing intermediate results.

NewSQL systems extend SQL compatibility into distributed contexts by combining ACID transactions with horizontal scalability. CockroachDB, launched as an open-source project in 2015, implements a distributed SQL database inspired by Google's Spanner, using a key-value store foundation to ensure serializable isolation and fault tolerance across clusters while supporting the standard PostgreSQL wire protocol for seamless application migration. TiDB, released in 2016 by PingCAP, offers MySQL-compatible SQL with strong consistency guarantees, separating compute and storage layers to handle both OLTP and OLAP workloads on commodity hardware. These systems prioritize relational semantics in hybrid setups, enabling geo-distributed deployments without sacrificing transactional integrity.

NoSQL databases incorporate SQL layers to enhance query expressiveness while retaining non-relational benefits like schema flexibility. Apache Cassandra's Cassandra Query Language (CQL), introduced in 2011, provides a SQL-inspired syntax for defining tables, inserting data, and executing SELECT statements on its partitioned storage model, though queries are limited to partition-key-based access patterns to preserve scalability. MongoDB's aggregation pipeline, added in version 2.2 in 2012, emulates SQL aggregation through a sequence of stages like $match (for filtering), $group (for grouping and aggregation), and $project (for projection), allowing complex data transformations on document collections without joins and thus mimicking GROUP BY and HAVING clauses in a denormalized environment.

Hybrid systems combine benefits such as NoSQL-style schema evolution with SQL's declarative querying, reducing developer friction in mixed-workload scenarios. For instance, Amazon Redshift, launched in 2012 as a fully managed data warehouse, employs PostgreSQL-compatible SQL on columnar storage integrated with S3 for petabyte-scale analytics, enabling hybrid workloads that blend relational queries with data-lake ingestion. This familiarity accelerates adoption in environments mixing OLAP with data lakes. However, trade-offs persist between consistency models and performance. NoSQL SQL layers often embrace eventual consistency for availability and partition tolerance under CAP constraints, potentially leading to stale reads during network partitions, whereas NewSQL hybrids like CockroachDB enforce strong consistency via consensus protocols like Raft, incurring higher latency for strict guarantees.
These choices balance performance against reliability, with hybrid designs favoring tunable consistency to suit application needs.
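CQL, discussed above, illustrates how closely such layers track SQL syntax while constraining access patterns. The sketch below is CQL rather than standard SQL, and the keyspace and table are hypothetical; an efficient SELECT must restrict the partition key (region here), reflecting Cassandra's partition-oriented storage.

  -- Hypothetical CQL table keyed by (region, sale_id).
  CREATE TABLE sales_by_region (
      region  text,
      sale_id uuid,
      amount  decimal,
      PRIMARY KEY (region, sale_id)
  );

  -- Efficient: the query touches a single partition via the partition key.
  SELECT sale_id, amount FROM sales_by_region WHERE region = 'EMEA';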

Object-Relational and Spatial Extensions

SQL:1999 introduced object-relational features to enhance the relational model with object-oriented capabilities, including user-defined types (UDTs) that allow structured types with attributes and methods, and reference (REF) types that act as pointers to rows in typed tables. REF types enable relationships between objects by referencing instances in other tables, supporting dereferencing operations to access related data directly. UDTs can define methods, such as the ordering methods EQUAL and LESSTHAN, which operate on instances of the type to compare or manipulate object state. Commercial implementations extended these concepts: Oracle's ANYDATA type, introduced in Oracle 9i, provides a self-describing container that can hold an instance of any built-in or user-defined type along with its type descriptor, facilitating dynamic handling of heterogeneous data. PostgreSQL implements table inheritance through the INHERITS clause in CREATE TABLE, allowing child tables to automatically include columns and constraints from a parent table and enabling hierarchical modeling in which queries on the parent transparently access data from subtypes.

For spatial data, the SQL/MM Spatial standard (ISO/IEC 13249-3) defines a framework for managing geospatial information within SQL databases, introducing geometry types such as ST_Point, ST_LineString, and ST_Polygon to represent spatial objects. It specifies routines like ST_Distance, which computes the shortest distance between two geometries, and ST_Intersects, which determines whether two spatial objects overlap—essential for geographic information system (GIS) queries, such as identifying parcels intersecting a given boundary. A prominent vendor extension is PostGIS, released in 2001 by Refractions Research as an add-on to PostgreSQL, which implements SQL/MM Spatial alongside Open Geospatial Consortium standards, adding support for spatial indexing via GiST and advanced functions for raster and vector analysis.
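Brief sketches of both extensions, assuming hypothetical tables: PostgreSQL's INHERITS clause for subtyping, and two SQL/MM routines as implemented by PostGIS (geometry arguments given as simple WKT literals).

  -- PostgreSQL table inheritance: capitals gains all columns of cities.
  CREATE TABLE cities   (name TEXT, population INT);
  CREATE TABLE capitals (country TEXT) INHERITS (cities);
  SELECT name FROM cities;       -- scans cities and capitals
  SELECT name FROM ONLY cities;  -- parent rows only

  -- SQL/MM spatial routines (PostGIS): distance and intersection tests.
  SELECT ST_Distance(
             ST_GeomFromText('POINT(0 0)'),
             ST_GeomFromText('POINT(3 4)'));  -- returns 5
  SELECT ST_Intersects(
             ST_GeomFromText('LINESTRING(0 0, 2 2)'),
             ST_GeomFromText('POLYGON((1 1, 3 1, 3 3, 1 3, 1 1))'));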

Alternatives and Criticisms

Alternative Query Languages

Relational algebra provides the theoretical foundation for SQL, consisting of primitive operations such as selection (σ), projection (π), and join (⋈) that enable the manipulation and querying of relational data. These operations form the mathematical basis upon which SQL queries are constructed, translating declarative statements into executable plans. However, relational algebra expressions are often verbose and require explicit specification of intermediate results, making them less practical for direct user interaction than SQL's more concise syntax.

Query-by-Example (QBE) emerged as a visual alternative to textual query languages in the 1970s, developed by Moshe M. Zloof at IBM and first described in 1977. QBE allows users to construct queries by filling in a skeletal table template with example values or conditions, facilitating intuitive querying without writing code; for instance, entering a partial row like "Smith" under a "Name" column retrieves matching records. This approach was later adopted in tools like Microsoft Access, whose query design grid implements QBE principles to simplify database interaction for non-programmers.

In NoSQL environments, domain-specific languages have arisen to handle non-relational data structures more naturally. Cypher, introduced by Neo4j in 2011, is a declarative query language tailored for property graphs, using ASCII-art patterns like MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name to traverse nodes and relationships efficiently. Similarly, ArangoDB's AQL (ArangoDB Query Language), designed for multi-model databases supporting documents, graphs, and key-value stores, extends SQL-like syntax with operations for heterogeneous data, such as FOR doc IN documents FILTER doc.type == "graph" RETURN doc.

Modern alternatives integrate query capabilities directly into programming languages. Language Integrated Query (LINQ), released by Microsoft in 2007 as part of C# 3.0 and .NET Framework 3.5, embeds SQL-like expressions within code using syntax like from p in products where p.Price > 10 select p, enabling type-safe queries over in-memory collections, databases, or XML without context switching.

SQL's declarative nature, where users specify what data is desired without detailing how to retrieve it, contrasts with alternatives like Datalog, another declarative, logic-based language that excels at recursive queries over deductive databases using rules such as ancestor(X,Y) :- parent(X,Y). ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z). While both avoid procedural steps, Datalog's rule-oriented approach offers greater expressiveness for inference-heavy tasks, though it lacks SQL's widespread optimization for large-scale relational storage and joins. In visual or embedded contexts like QBE and LINQ, alternatives prioritize usability over SQL's standardized verbosity, whereas graph-focused languages like Cypher emphasize structural traversal beyond flat relations.
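For comparison with the Datalog rules above, the same transitive ancestor relation can be written in standard SQL with a recursive CTE, assuming a hypothetical parent(parent_name, child_name) table; the rule-based form is arguably more direct.

  -- Hypothetical table: parent(parent_name, child_name).
  WITH RECURSIVE ancestor (anc, descendant) AS (
      SELECT parent_name, child_name FROM parent  -- ancestor(X,Y) :- parent(X,Y).
      UNION ALL
      SELECT p.parent_name, a.descendant          -- ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
      FROM parent p
      JOIN ancestor a ON a.anc = p.child_name
  )
  SELECT * FROM ancestor;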

Design and Theoretical Criticisms

SQL's design has been subject to theoretical criticism since its early development, primarily for deviating from the principles of the relational model proposed by E.F. Codd and for introducing inconsistencies that hinder expressiveness and reliability. Critics argue that while SQL aimed to provide a user-friendly interface to relational databases, its ad hoc features and implementation compromises produced a language that is neither fully orthogonal nor complete, leading to unnecessary complexity in query formulation. These flaws stem from SQL's origins in the 1970s System R project at IBM, where practical usability often trumped theoretical purity.

A key issue is SQL's non-orthogonality, where features overlap redundantly or inconsistently, violating the principle that language constructs should be independent and composable without side effects. For instance, the FULL OUTER JOIN operation can be simulated by combining LEFT OUTER JOIN and RIGHT OUTER JOIN results with a UNION, rendering the dedicated FULL OUTER JOIN syntax superfluous and increasing the risk of inconsistent implementations across dialects. Similarly, table expressions do not nest uniformly; a query like SELECT EMP# FROM (NYC UNION SFO) must be rewritten as SELECT EMP# FROM NYC UNION SELECT EMP# FROM SFO, breaking the expected composability of relational operations. Built-in functions exacerbate this, as aggregates cannot be nested directly without awkward reformulation, limiting modularity. These redundancies stem from SQL's piecemeal evolution, making the language harder to learn and extend.

SQL also suffers from incompleteness in supporting core relational operations, requiring vendor-specific extensions for full expressiveness. Early standards did not provide the INTERSECT or EXCEPT set operators, forcing users to emulate them via subqueries or joins, which undermines closure (see the sketch following this passage). Moreover, early SQL lacked built-in support for relational concepts like declarative foreign keys or domain constraints beyond basic types, compelling workarounds such as CHECK constraints or procedural code to enforce integrity. This incompleteness means simple relational tasks, like computing set differences without duplicates, demand verbose workarounds, deviating from the model's goal of uniform operability on relations. Later standards added the missing set operators (INTERSECT and EXCEPT arrived in SQL-92), but gaps persist without full adherence to relational theory.

The verbosity of SQL queries is another theoretical shortcoming, as even straightforward operations require excessive syntax that amplifies the potential for errors. Basic table retrieval mandates SELECT * FROM T rather than simply T, and aggregations demand repeating grouped attributes in both the SELECT and GROUP BY clauses, as in SELECT r_regionkey, r_name, COUNT(*) FROM region GROUP BY r_regionkey, r_name. Window functions lack a direct filtering mechanism akin to HAVING for groups, necessitating subqueries for post-window filtering, as in SELECT o_custkey, rk FROM (SELECT o_custkey, RANK() OVER (ORDER BY o_totalprice) rk FROM orders) t WHERE rk < 4. This prolixity, despite SQL's ambition for declarative readability, burdens users with boilerplate, increasing cognitive load and error rates in complex queries. With several hundred keywords in recent implementations such as PostgreSQL, the language's sheer bulk further complicates mastery.
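The incompleteness point can be made concrete. The sketch below uses a hypothetical pair of employee tables: the first form relies on the standard EXCEPT operator, while the second emulates it in dialects that lack it, with duplicate elimination handled explicitly.

  -- Standard set difference (SQL-92 onward; not available in every dialect):
  SELECT emp_id FROM nyc_employees
  EXCEPT
  SELECT emp_id FROM sfo_employees;

  -- Emulation where EXCEPT is missing: duplicates must be removed by hand.
  SELECT DISTINCT n.emp_id
  FROM nyc_employees n
  WHERE NOT EXISTS (SELECT 1 FROM sfo_employees s
                    WHERE s.emp_id = n.emp_id);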
Historical baggage in SQL manifests as reserved words and legacy syntax that conflict with user identifiers, reflecting compromises from its 1970s origins and early hardware constraints. Words like SELECT, ORDER, and GROUP are reserved, requiring delimiters (e.g., double quotes or brackets) when used as table or column names, as in CREATE TABLE "order" (...), which disrupts natural naming and portability across dialects. This issue arises from SQL's evolution without a clean slate, with keywords accumulating from revision to revision and colliding with real-world schemas. The uppercase convention for keywords, a holdover from terminals lacking lowercase support, adds to the perception of an outdated design.

Finally, SQL deviates from Codd's relational rules, particularly Rule 5, which mandates a comprehensive data sublanguage supporting data definition, view definition, data manipulation, integrity constraints, authorization, and transaction boundaries in both interactive and programmatic modes with linear syntax. While SQL provides DDL and DML in both standalone and embedded forms, it falls short on uniform view updatability—update rules are restrictive, permitting updates only through simple views and prohibiting them for aggregated views without vendor extensions—and its earliest versions lacked native enforcement of relational integrity such as foreign keys. Codd himself viewed SQL as flawed for permitting duplicate rows and for its incomplete treatment of missing information, compromising the model's logical foundation.

Practical Limitations and Impedance Mismatch

The object-relational impedance mismatch arises from fundamental differences between the relational model used in SQL databases and the object-oriented paradigm prevalent in modern application development languages. In relational databases, data is organized into tables with rows and columns, enforcing normalization to avoid redundancy, whereas object-oriented programs represent data as interconnected objects with inheritance, encapsulation, and polymorphism. This gap complicates the mapping of complex object hierarchies to flat table structures, often requiring manual conversions that increase development effort and introduce errors.

A prominent example is the handling of one-to-many relationships: an object might contain a collection of child objects, but in SQL these must be queried separately from parent records, leading to fragmented data-access patterns. Granularity differences further exacerbate the issue, as a single object may aggregate data from multiple tables, while inheritance lacks direct equivalents in relational schemas without complex joins or single-table inheritance strategies. These discrepancies result in boilerplate code for serialization and deserialization, hindering productivity in applications built with languages like Java or C#.

The N+1 query problem exemplifies a practical bottleneck stemming from this mismatch, particularly in object-relational mapping (ORM) tools. When an application fetches a list of N parent entities and then iterates over them to load associated child entities lazily, it executes one initial query followed by N additional queries—one per parent—resulting in excessive database round-trips and degraded throughput. In Hibernate, for instance, the default fetch strategies of associations like @ManyToOne or @OneToMany can trigger this issue unless explicitly mitigated, potentially multiplying query overhead in loops over result sets.

SQL's emphasis on ACID (Atomicity, Consistency, Isolation, Durability) properties ensures transactional integrity but complicates horizontal scalability in distributed environments. Traditional relational databases are optimized for vertical scaling via larger hardware, as ACID compliance relies on centralized locking and two-phase commits that become inefficient across multiple nodes without partitioning. Achieving horizontal scale often requires manual sharding—dividing data across independent database instances—which complicates cross-shard transactions and can weaken isolation guarantees unless advanced techniques like distributed commit protocols are used. Oracle Sharding, for example, maintains ACID properties while enabling linear scalability, but it demands careful shard-key selection to minimize inter-shard joins.

Performance pitfalls in SQL queries frequently stem from unintended Cartesian products and index misuse, which amplify resource consumption. A Cartesian product occurs when joins lack proper conditions, producing the cross-product of the row counts of the involved tables: joining two tables of 1,000 rows each without an ON clause yields 1,000,000 rows, overwhelming memory and execution time. Missing join predicates, such as omitted WHERE filters in multi-table queries, can inadvertently cause this explosion, as seen in legacy code or ad hoc reports. Index misuse compounds these issues by failing to accelerate query paths effectively: creating indexes on low-selectivity columns, or neglecting composite indexes for frequent join conditions, leads to full scans in which the database reads unnecessary rows.
For example, indexing only a single column referenced by a multi-column WHERE clause can force scans instead of index seeks, inflating I/O costs; common recommendations include analyzing query patterns with execution plans and targeting high-selectivity columns. To mitigate these limitations, ORMs such as Entity Framework and Hibernate abstract the impedance mismatch by automating object-to-table mappings and providing mechanisms to optimize queries. Entity Framework reduces the paradigm gap through features like LINQ-to-SQL translation, which generates efficient parameterized queries, and supports eager loading via Include() to preempt N+1 issues. Similarly, query planners in modern DBMSs, such as PostgreSQL's genetic query optimizer, automatically select efficient execution paths, including hash joins and index-only scans. These tools, when configured with fetch strategies like JOIN FETCH in Hibernate, let developers balance usability and efficiency without resorting to raw SQL verbosity.
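The N+1 pattern and its join-based fix reduce to SQL along the following lines; the orders/customers schema is hypothetical, and an ORM typically generates the first shape unless eager fetching is configured.

  -- N+1 pattern: one query for the parents...
  SELECT id, customer_id FROM orders;
  -- ...then one query per returned order (N round-trips):
  SELECT * FROM customers WHERE id = ?;  -- executed once for each order

  -- Single-query alternative an ORM emits under eager fetching (e.g., JOIN FETCH):
  SELECT o.id, o.customer_id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id;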

    Overview of Entity Framework 6 - EF6 - Microsoft Learn
    Oct 14, 2020 · As an O/RM, EF6 reduces the impedance mismatch between the relational and object-oriented worlds, enabling developers to write applications ...Missing: ORM mitigation
  155. [155]