
Apache Drill

Apache Drill is an open-source distributed massively parallel processing (MPP) query engine designed for interactive analysis and exploration of large-scale, semi-structured data without requiring predefined schemas. It supports standard ANSI SQL queries across diverse data sources, including Hadoop file systems, NoSQL databases, and cloud storage, enabling low-latency querying on formats and sources such as JSON, Parquet, CSV and other text files, HBase, Hive tables, and others like Kafka streams, ESRI Shapefiles, PCAP files, and OpenTSDB. Inspired by Google's Dremel system, Drill employs an in-memory columnar data representation to handle complex, nested data structures efficiently, making it suitable for data-intensive applications in environments like Hadoop ecosystems. Drill entered the Apache Incubator in 2012, became a top-level Apache project in December 2014, and reached version 1.0 in 2015. It emphasizes plug-and-play integration with existing infrastructure, such as Apache Hive and HBase, while providing JDBC and ODBC drivers for compatibility with business intelligence tools and SQL clients. Its architecture features a distributed execution engine that scales horizontally across clusters, supporting dynamic schema discovery for self-describing data and alternative query languages beyond SQL. Notable for its focus on high-performance analytics on evolving datasets, Drill avoids the need for data warehousing or ETL processes, allowing users to query raw data directly from storage systems.

Introduction

History and Development

Apache Drill's origins trace back to Google's 2010 Dremel paper, which described a scalable system for ad-hoc querying of large, nested datasets using a columnar execution engine. This inspiration led to the development of an open-source equivalent, aiming to enable interactive analysis on petabyte-scale data without predefined schemas. Initial development began at MapR Technologies in 2012, with the project announced as an open-source initiative in August of that year to replicate Dremel's capabilities for Hadoop and NoSQL environments. MapR released the first public version, MapR Drill 0.1, in October 2012, shortly after donating the codebase to the Apache Software Foundation as an incubating project in August 2012. The project graduated to a top-level Apache project in December 2014, reflecting its maturity and growing adoption. Key early contributors included the MapR team, led by project founder Tomer Shiran, who served as vice president of product management at MapR. In 2019, MapR Technologies was acquired by Hewlett Packard Enterprise (HPE), transitioning Drill's primary stewardship to the broader Apache community while HPE integrated related technologies into its data fabric offerings. Major milestones include the 1.0 release in May 2015, which introduced schema-free SQL querying for Hadoop, NoSQL, and cloud storage. Subsequent versions added dynamic user-defined functions in 1.9 (November 2016), cryptographic functions and PCAP file support in 1.11 (July 2017), and the latest release, 1.22.0, in June 2025, featuring bug fixes, DNS lookup functions, additional plugin support, and the removal of Java 8 compatibility. Under the Apache License 2.0, Drill has seen sustained community growth, with contributions from organizations like Dremio—founded by Shiran—accelerating after 2020 through enhancements to its query engine and integration with big data ecosystems such as Hadoop.

Overview and Purpose

Apache Drill is an open-source, schema-free SQL query engine designed for interactive analysis of large-scale, multi-structured data without requiring ETL preprocessing. It supports standard ANSI SQL for querying diverse data sources, including Hadoop, NoSQL databases, and cloud storage, enabling ad-hoc exploration of petabyte-scale datasets across thousands of servers. This distributed architecture allows users to perform low-latency queries directly on raw or semi-structured data, such as JSON, Parquet, and text files, without predefined schemas or metadata catalogs. Unlike traditional relational databases focused on transactional processing, Apache Drill emphasizes data exploration and analytics through its schema-optional model, which automatically infers data structures at query time. Inspired by Google's Dremel, it prioritizes flexibility for rapidly evolving datasets over rigid schema enforcement, reducing setup time and enabling self-service analytics. Common use cases include discovering insights in raw logs and nested JSON files, as well as integrating with tools like Tableau for ad-hoc reporting across heterogeneous sources. Implemented in Java for cross-platform compatibility, Drill operates under the Apache License 2.0, fostering community contributions and broad adoption in big data environments.
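
The following minimal sketch illustrates this direct-query model; the dfs storage plugin is part of Drill's default configuration, while the file path and field names are hypothetical placeholders for an event log stored as JSON.

  -- Query a raw JSON file in place: no table creation, schema definition, or ETL step.
  SELECT t.event_type,
         t.user_info.id AS user_id,
         t.`timestamp`
  FROM dfs.`/data/logs/events.json` AS t
  WHERE t.event_type = 'login'
  LIMIT 10;

Drill infers the column types at read time, including the nested user_info map, and returns only the requested fields.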

Architecture

Core Components

Drill's distributed architecture relies on several fundamental components that enable its schema-free querying capabilities across diverse data sources. At the heart of the system is the Drillbit, a core daemon process that runs on each node in the cluster. The Drillbit serves as the primary query engine, responsible for handling client requests, coordinating query processing, and delivering results. It encompasses an execution engine for running query fragments and a controller for planning, with no master-slave hierarchy—any Drillbit can accept and drive queries while maximizing locality by assigning fragments to nodes closest to the data. In distributed mode, Apache Drill integrates with Apache ZooKeeper for essential coordination. ZooKeeper manages cluster membership, tracks availability among Drillbits, and handles configuration management, allowing clients to discover available nodes through a ZooKeeper quorum without direct knowledge of topology changes, such as node additions or failures. This integration ensures fault-tolerant operation by providing health-check information and node availability data to the driving Drillbit for efficient query distribution. The Storage Engine Plugin Framework provides a modular abstraction layer for interacting with various data sources. This framework defines interfaces for metadata retrieval, read and write operations, and optimization rules, enabling dynamic loading of plugins to connect Drill to underlying storage systems like Hadoop file systems or HBase. It supports extensibility, allowing new plugins to be added without modifying the core engine, and facilitates schema-on-read interactions tailored to each data source. For efficient in-memory data handling, Apache Drill employs Value Vectors as its primary in-memory data structure. These are columnar-oriented constructs consisting of one or more contiguous buffers that store sequences of values along with metadata, passed between operators during query execution. Value Vectors enable vectorized processing, leveraging modern CPU architectures for high-performance columnar operations, and inherently support dynamic schemas by allowing schema changes mid-query through the creation of new vectors in record batches. They accommodate various data types, including fixed-width, nullable, repeated, and variable-width values, using techniques like offset tables for indirection and selection vectors for subsets, ensuring constant-time random access and immutability post-construction. Complementing these is the Metadata Provider, a decentralized caching mechanism that derives schema information directly from storage plugins. This provider eliminates the need for a centralized metadata store by inferring and caching schemas at query time, optimizing repeated queries through schema discovery and integration with sources such as external metastores. Accessible via the INFORMATION_SCHEMA, it supports dynamic schema handling and enhances query planning efficiency without relying on external execution engines.
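
As a rough illustration of how this decentralized metadata surfaces to users, the INFORMATION_SCHEMA tables can be queried like any other source; the workspace and table names below are hypothetical.

  -- Enumerate tables that Drill's metadata layer exposes across configured plugins.
  SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
  FROM INFORMATION_SCHEMA.`TABLES`
  WHERE TABLE_SCHEMA = 'dfs.logs'
  ORDER BY TABLE_NAME;

  -- Inspect the columns and types Drill reports for one table.
  SELECT COLUMN_NAME, DATA_TYPE
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_NAME = 'events';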

Query Processing Pipeline

Apache Drill's query processing pipeline orchestrates the transformation of user-submitted SQL queries into executable operations across a distributed cluster, emphasizing on-the-fly schema discovery and efficient data movement without requiring predefined schemas. This pipeline begins with query ingestion and progresses through optimization and parallel execution, leveraging storage plugins for data access as needed. The initial stage involves parsing and validation, where incoming ANSI SQL queries—supporting extensions for complex types such as arrays and maps—are processed by the Apache Calcite framework to generate a preliminary logical plan. This plan represents an abstract sequence of operators, such as scans, projections, and joins, while validating syntax and semantics against Drill's supported constructs. The parser ensures compatibility with SQL-2003 standards, converting the query into a relational algebra tree that captures the intended data flow. Following parsing, logical planning occurs via Drill's cost-based optimizer, which refines the initial logical plan into an optimized version tailored to the target data sources. During this phase, schema discovery happens on the fly: for schema-free sources such as JSON or CSV files, the optimizer probes samples from storage plugins to infer structures dynamically, enabling queries without upfront schema definitions. Rule-based transformations, such as predicate pushdown and join reordering, are applied to minimize data movement and computation, producing a high-level logical plan that balances flexibility with preliminary efficiency. This step incorporates statistics from sources where available, ensuring the plan accounts for data distribution and size. The optimized logical plan is then converted into a physical plan, which specifies concrete execution strategies, including operator implementations and parallelism decisions. A parallelizer divides the physical plan into major fragments—distinct phases separated by data exchange boundaries, such as sorts or aggregations—and further slices them into minor fragments for concurrent execution. Major fragments delineate stages, while minor fragments enable intra-node and inter-node parallelism, with data movement facilitated through network exchanges between sender and receiver operators. This fragmentation allows queries to scale horizontally across multiple nodes, distributing workload without centralized coordination beyond planning. At the core of execution lies the execution engine, which processes minor fragments in a pipelined, vectorized manner to achieve low-latency columnar operations. Data is handled using Value Vectors, specialized structures that store columnar values in contiguous buffers, supporting fixed-width, variable-width, and nullable types with metadata for efficient access. Vectorization enables operators to process batches of records simultaneously—typically 4,096 rows per batch—leveraging SIMD instructions and reducing the overhead of row-by-row processing. In memory-constrained scenarios, certain operators like hash aggregates, hash joins, and external sorts support spill-to-disk, temporarily writing excess data to configurable temporary directories before resuming in-memory processing, thus preventing out-of-memory failures while maintaining query progress. Fragments execute as leaf (data scans), intermediate (transformations), or root (final assembly) types, with data flowing upstream through the fragment tree. Finally, result delivery consolidates outputs in the root fragment, performing any remaining aggregations, projections, and formatting before streaming results back to the client.
This stage ensures that only requested columns and rows are materialized, minimizing unnecessary computation and network transfer. The entire pipeline operates optimistically, scheduling all fragments at once for pipelined execution, and supports iterative refinement through tools like EXPLAIN for plan inspection.
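
Plan inspection can be sketched with Drill's EXPLAIN statements; the Parquet path below is a hypothetical example.

  -- Show the physical plan, including fragment and exchange operators.
  EXPLAIN PLAN FOR
  SELECT region, COUNT(*) AS orders
  FROM dfs.`/data/sales.parquet`
  GROUP BY region;

  -- Show only the logical plan, before physical implementation choices are made.
  EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR
  SELECT region, COUNT(*) AS orders
  FROM dfs.`/data/sales.parquet`
  GROUP BY region;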

Data Support

Back-end Storage Plugins

Apache Drill employs pluggable storage plugins to connect to a wide range of back-end data sources, enabling schema-free SQL queries against diverse systems without requiring data movement or transformation. These plugins are essential for Drill's distributed execution engine, allowing it to read directly from storage while optimizing query plans based on the underlying system's capabilities. For file systems, Drill provides a versatile file system storage plugin that supports distributed and local storage via the Hadoop FileSystem API, including the Hadoop Distributed File System (HDFS) for scalable processing and local file systems for direct access to on-premises data. This plugin extends to cloud-based object stores, such as Amazon S3 through a dedicated S3 storage plugin that handles AWS credentials and bucket configurations for seamless integration with cloud workloads. Similarly, the Azure Blob storage plugin leverages Hadoop-compatible layers to query data in Azure Blob Storage, supporting secure access via account keys or shared access signatures. Google Cloud Storage is also accessible through the file system plugin when configured with the appropriate Hadoop GCS connector, enabling queries across multi-cloud environments. In Drill 1.22.0, support for SFTP file systems was added, allowing secure file transfers and access to remote servers over SSH for scenarios requiring encrypted connections. Drill's NoSQL database connectors include dedicated plugins for popular columnar and document stores. The HBase storage plugin enables columnar access to HBase tables, integrating with HBase's client libraries for efficient scans and filters pushed down to the storage layer. For document-oriented databases, the MongoDB storage plugin uses MongoDB's latest Java driver to read collections, supporting operations like aggregation pipelines translated to SQL. The Cassandra storage plugin, introduced in Drill 1.19, allows SQL queries against Cassandra tables using Apache Calcite for optimization and pushdown of predicates to Cassandra's query engine. Additionally, the MapR-DB plugin provides native support for MapR's distributed database, treating it as a binary store with schema-on-read capabilities similar to HBase. The Hive storage plugin connects to the Hive metastore, enabling queries on Hive tables across various underlying storage formats like Parquet and ORC, with expanded Hive table support since Drill 1.13. For relational databases, the RDBMS storage plugin uses JDBC drivers to query traditional SQL databases such as MySQL, PostgreSQL, Oracle, and SQL Server, facilitating joins between relational data and non-relational sources. Drill also supports specialized plugins for streaming and time-series data. The Kafka storage plugin, introduced in Drill 1.12, allows querying Kafka topics as tables for real-time stream processing. Similarly, the OpenTSDB storage plugin enables SQL access to OpenTSDB time-series data via its HTTP API. Storage plugins are configured using JSON-formatted definitions, which specify connection parameters such as URIs, credentials, and authentication modes, managed through the Drill Web UI, REST API, or configuration files like storage-plugins-override.conf. This structure includes attributes for enabling the plugin, defining workspaces, and setting formats, with support for credential providers to handle sensitive information securely. Authentication modes, enhanced in Drill 1.21, include plain credentials, shared user, and user translation, with credential provider integration available on a per-plugin basis.
A key strength of these plugins is federated querying, which permits joining data across multiple back-ends in a single query—for instance, combining Parquet files from S3 with documents from MongoDB—leveraging Drill's optimizer to push computations to the appropriate storage systems for efficiency. This capability plays a role in schema inference by allowing Drill to discover structures dynamically from heterogeneous sources during query execution.
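
A hedged sketch of such a federated query is shown below; the s3 and mongo plugin names, workspaces, and field names are illustrative and depend entirely on how the storage plugins are configured.

  -- Join order files in S3 with customer documents in MongoDB in one statement.
  SELECT o.order_id, o.total, c.name, c.country
  FROM s3.`orders` AS o
  JOIN mongo.crm.customers AS c
    ON o.customer_id = c.customer_id
  WHERE c.country = 'DE';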

Supported Data Formats

Apache Drill natively supports structured file formats optimized for efficient querying. Parquet files benefit from columnar storage, enabling projection pushdown that reads only required columns to minimize I/O and improve performance on large datasets. Avro files are supported with schema evolution capabilities, allowing queries on files with evolving structures without metadata reconfiguration. Delimited formats like CSV and TSV are handled through text readers, where delimiters are specified or inferred from file extensions and format configurations to extract columnar data dynamically. For semi-structured data, Drill provides robust handling of JSON, treating nested objects as MAP types and repeated elements as ARRAY types, which can be queried and manipulated using built-in SQL functions for accessing hierarchical fields. XML files are supported via a dedicated format plugin introduced in version 1.19, enabling schema-free reading by converting XML to a queryable structure, though capabilities are limited to basic element extraction without advanced processing. Unstructured and raw data formats are accessible for specialized analysis. PCAP files, used for network packet captures, have been supported since version 1.11 with built-in decoding functions that extract fields like timestamps, IP addresses, ports, and payloads for protocols including TCP, UDP, and ICMP. ESRI Shapefiles, supported since version 1.17, allow querying geospatial vector data such as points, lines, and polygons using a dedicated format plugin. Log files can be parsed using the logfile format plugin, which applies regular-expression patterns to split lines into columns, facilitating ad-hoc querying of unstructured log entries without preprocessing. Drill's schema discovery mechanism allows on-the-fly inference when querying directories containing mixed formats, automatically detecting and adapting to varying schemas across files to avoid errors from inconsistencies. This dynamic approach supports evolving data without requiring upfront metadata definitions. Complex data types are integral to Drill's querying model, with full support for arrays, maps, and unions derived from formats like JSON, Parquet, and Avro. Flattening operators, such as FLATTEN, enable unnesting of repeated or nested structures into relational tables for standard SQL operations. These capabilities are accessed through configured back-end storage plugins that integrate the formats into Drill's query engine.
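
Format behavior can also be overridden per query with table function parameters, as in this sketch for a headered CSV file; the path is hypothetical and the option names follow Drill's documented text format settings.

  -- Read a CSV file with a header row, overriding the text format options inline.
  SELECT *
  FROM table(dfs.`/data/exports/users.csv`(
      type => 'text',
      fieldDelimiter => ',',
      extractHeader => true))
  LIMIT 20;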

Features and Capabilities

Schema-Free Querying

Apache Drill enables schema-free querying by performing dynamic schema discovery at query execution time, allowing users to access data without predefined schemas or type specifications. This feature leverages self-describing formats such as JSON and Parquet to automatically infer the structure, including nested and repeated elements, as the query processes the data. For instance, Drill can query evolving datasets where fields may vary across records or files, adapting operators without requiring centralized metadata storage. This dynamic resolution supports direct exploration of raw, multi-structured data, eliminating the need for upfront schema definition or maintenance. In practice, users can issue SQL queries against files or directories without loading data into a traditional database, as Drill derives the schema from the source during execution. This approach is particularly valuable for ad-hoc analysis of semi-structured data, where traditional relational systems would demand rigid schemas. To handle nested and complex data types inherent in schema-free environments, Drill extends standard SQL with functions tailored for hierarchical structures. The FLATTEN function, for example, expands repeated fields like arrays into individual rows while preserving associations with other columns, enabling queries like SELECT FLATTEN(employees.skills) AS skill FROM hr.employees; to transform nested employee skills into a flat result set. Similarly, UNNEST combined with LATERAL joins provides SQL-standard support for unnesting collections, such as arrays of maps, allowing operations like SELECT e.emp_name, s.skill FROM hr.employees e, LATERAL (SELECT t.sk AS skill FROM UNNEST(e.skills) t(sk)) s WHERE s.skill = 'SQL';, which processes nested data in chunks for efficiency and handles variations across elements. These extensions facilitate intuitive access to complex types, such as SELECT t.trans_info.prod_id[0] FROM dfs.`transactions.json` AS t;, without prior flattening. Drill's schema-free model obviates the need for Extract, Transform, Load (ETL) processes, permitting queries on raw data in its native location across sources like HDFS or cloud object storage. This reduces preparation time for data exploration, as users can combine disparate formats—such as flat files and database tables—in a single query without schema alignment or preprocessing. For example, a query might join nested JSON documents with relational data directly, bypassing traditional warehousing steps. In cases of missing or inconsistent fields, Drill employs graceful degradation, treating absent elements as null values during schema discovery to ensure query continuity rather than failure. Users can opt for stricter schema validation via session settings if required, but the default behavior prioritizes flexibility for evolving datasets. This handling supports robust exploration of dynamic structures, where fields may appear or change across records without halting execution. Since version 1.9, Drill has supported dynamically registered user-defined functions (UDFs) written in Java, registered via commands like CREATE FUNCTION USING JAR 'myudf.jar';, which allow custom processing for format-specific needs without cluster restarts. These UDFs extend schema-free capabilities by enabling tailored logic for nested or irregular data, such as parsing custom record formats, and are managed through distributed file systems for multi-tenant environments.
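
The sketch below, using a hypothetical clickstream directory, shows how such inconsistent records are handled: fields absent from older files simply surface as NULL rather than aborting the query.

  -- Query a directory of JSON files whose schemas have drifted over time.
  SELECT t.user_id,
         t.device.os AS os,   -- present only in newer records; NULL elsewhere
         t.referrer           -- missing entirely from some files; NULL there
  FROM dfs.`/data/clickstream` AS t
  WHERE t.referrer IS NOT NULL
  LIMIT 10;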

Performance and Scalability

Apache Drill achieves high performance and scalability through its distributed execution model, which supports horizontal scaling across thousands of nodes by breaking queries into parallel fragments executed on multiple Drillbits. This fragment parallelism allows queries to process large datasets efficiently, with data locality ensuring that computations occur close to the data sources to minimize network shuffle and latency. By co-locating Drillbits with storage nodes, Drill reduces data movement, enabling it to handle petabyte-scale workloads interactively. The query optimizer in Drill employs rule-based transformations to restructure plans for efficiency, incorporating cost estimates based on resource usage such as I/O, CPU, and memory to guide decisions. Key optimizations include join reordering to minimize intermediate result sizes and predicate pushdown to filter data early in the execution plan, reducing the volume of data processed downstream. Additionally, datastore-aware rules tailor optimizations to specific back-ends, such as leveraging native indexing in stores like MongoDB or HBase, which further enhances performance by pushing computations to the storage layer where possible. Drill's vectorized execution engine processes data in columnar format using Value Vectors, which represent batches of records as contiguous memory arrays to exploit SIMD-capable CPU instructions. This approach improves CPU efficiency by operating on multiple values simultaneously, avoiding row-by-row processing and reducing serialization overhead between operators. As a result, queries benefit from higher throughput, particularly for analytical workloads involving aggregations and scans over large datasets. Memory management in Drill includes adaptive spilling mechanisms for memory-intensive operators like sorts and hash joins, which automatically offload excess data to disk when direct memory limits are approached, preventing out-of-memory errors. Query profiles provide detailed metrics on memory usage, operator costs, and spill events, enabling administrators to tune configurations such as direct memory allocation for optimal performance. Recent releases, such as version 1.22.0, include fixes for memory statistics accuracy and enhancements to storage plugins, improving reliability for large-scale queries. Version 1.22.0, released in June 2025, also introduced the REGEXP_EXTRACT function and improved handling of empty JSON keys, further enhancing schema-free querying and performance. Benchmarks demonstrate Drill's capability for petabyte-scale queries at interactive speeds, drawing inspiration from Google's Dremel for interactive analysis on massive datasets.
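
Tuning is typically done through system or session options; the sketch below uses two documented planner options, with illustrative values that should be verified against the running Drill version and workload.

  -- Raise the per-query memory ceiling on each node (in bytes) for this session.
  ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;

  -- Limit the number of minor fragments Drill runs per node for this session.
  ALTER SESSION SET `planner.width.max_per_node` = 8;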

Interfaces and Integrations

Client Interfaces

Apache Drill provides several native client interfaces for users to interact with the system, enabling query submission, execution monitoring, and management. These interfaces support both interactive and programmatic access, allowing integration with standard tools while maintaining Drill's schema-free querying capabilities. The primary interfaces include the command-line shell, standard database drivers, a RESTful API, and a web-based console, with options for authentication and encryption. The Drill Shell serves as a command-line interface for interactive SQL execution, built on SQLLine, a pure-Java console utility; queries follow the SQL:2011 standard with Drill's extensions for nested data. Users can start the shell in embedded mode using the command ./sqlline -u jdbc:drill:drillbit=local for local execution, or in distributed mode via ZooKeeper connection strings like ./sqlline -u jdbc:drill:zk=<zk-hosts>:2181 to connect to a cluster. It features auto-completion for SQL commands and keywords, configurable via the fastConnect option, and command-history management with commands such as !history to view past queries, supporting up to a customizable maximum number of rows. Additional customizations include prompt formatting (e.g., !set prompt good-drill) and output formats like table or CSV via !set outputFormat table. The shell is particularly useful for ad-hoc querying and testing storage plugins without requiring external tools. For broader integration with business intelligence and analytics tools, Apache Drill offers JDBC and ODBC drivers as standard connectors. The JDBC driver, distributed as drill-jdbc-all-<version>.jar within the Drill installation's jars/jdbc-driver directory or downloadable from the Apache mirrors, supports connections via URLs such as jdbc:drill:zk=<zk-quorum> and the driver class org.apache.drill.jdbc.Driver. It enables parameterized queries through methods like setMaxRows to limit result sets (introduced in Drill 1.16) and setQueryTimeout for execution time limits (since Drill 1.13), facilitating use with tools like Tableau, Excel, and the SQuirreL SQL client by adding the JAR to the client's classpath and configuring connection parameters. The ODBC driver, installable on Windows, Linux, and macOS, creates Data Source Names (DSNs) for seamless connectivity to BI applications, with configuration options in odbc.ini or connection strings to control behaviors like query timeouts and schema handling; installation involves downloading the driver package and verifying it in the system's ODBC administrator. These drivers allow BI tools to leverage Drill's querying without custom code, supporting direct data exploration from spreadsheets or dashboards. The REST API provides HTTP-based programmatic access to Drill, enabling query submission, status checks, and result retrieval through endpoints exposed at the default port 8047 (e.g., http://<drill-host>:8047). Key endpoints include POST /query.json for submitting SQL queries with options like queryType=SQL, autoLimit for result capping (added in Drill 1.16), and defaultSchema (since Drill 1.18); GET /status for cluster health; and GET /profiles.json for running or completed queries. Result streaming over HTTP responses in JSON format was enhanced in Drill 1.19 to reduce memory overhead during large data transfers, with verbose error reporting configurable via http.rest.errors.verbose=true. The REST API supports authentication using base64-encoded credentials in the Authorization header (Drill 1.18+) or form-based login with session cookies, making it suitable for scripting and application integration.
The built-in Web UI offers a browser-based console accessible at http://<drill-host>:8047 (or over HTTPS if enabled), serving as an intuitive interface for query execution, profile analysis, and metrics viewing without additional setup. Users submit queries on the Query page using keyboard shortcuts like Ctrl+Enter, view execution profiles on the Profiles page (with admin access to all queries and user-limited views), and monitor cluster metrics such as CPU and memory usage under the Metrics tab. Authentication is optional but can be enforced via FORM-based login with username/password (integrated with PAM) or SPNEGO for Kerberos environments, providing role-based views where administrators access additional options including the Storage and Threads tabs. The UI is particularly valuable for non-technical users to explore data visually and troubleshoot queries interactively. Interface configurations, including authentication and encryption, are managed primarily through Drillbit configuration files such as drill-override.conf. For user authentication, Plain security mode integrates with Pluggable Authentication Modules (PAM) for Linux-based validation against /etc/passwd, LDAP, or other backends, requiring users to provide credentials via clients like the Drill Shell or JDBC; this ensures consistent usernames and UIDs across cluster nodes. LDAP support is achieved through PAM configuration without dedicated Drill settings, while custom authenticators can extend username/password handling. HTTPS for the Web UI and REST API is enabled by setting SSL options in the Drillbit configuration, such as keystore paths, to encrypt communications. These options allow administrators to tailor access controls, with changes requiring Drillbit restarts for propagation.
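
A typical Drill Shell session might look like the following sketch; the ZooKeeper hosts, directory, and query are placeholders.

  $ ./bin/sqlline -u "jdbc:drill:zk=zk1:2181,zk2:2181,zk3:2181"
  apache drill> !set outputFormat csv
  apache drill> SELECT COUNT(*) AS row_count FROM dfs.`/data/logs`;
  apache drill> !history
  apache drill> !quit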

Ecosystem Integrations

Apache Drill integrates seamlessly with various business intelligence (BI) and visualization tools, enabling users to query diverse data sources without schema modifications. It provides native support for Apache Superset through JDBC connections, allowing interactive dashboards and visualizations on Drill-queried data. Similarly, Apache Zeppelin supports Drill via its JDBC interpreter, facilitating notebook-based exploration and ad-hoc querying within collaborative environments. For commercial tools, Drill's ODBC and JDBC drivers enable connectivity with Tableau, where users can connect to Drillbit nodes to build live data sources and perform schema-on-read operations across file systems and databases. Power BI integrates with Drill using ODBC drivers, supporting direct querying of heterogeneous data for report generation and data modeling. In data pipeline workflows, Apache Drill connects with Apache NiFi for ingestion and processing tasks, where NiFi's ExecuteSQL processor leverages Drill's JDBC interface to query and transform data flows from multiple sources. For streaming data, Drill's Kafka storage plugin allows SQL queries on Kafka topics, enabling real-time analysis of streaming events with support for filter pushdown and schema inference; custom plugins can extend this for advanced streaming integrations. Drill's compatibility with orchestration tools enhances automated workflows. It integrates with Apache Airflow through JDBC operators, as demonstrated in official tutorials for building ETL directed acyclic graphs (DAGs) that execute Drill queries on scheduled intervals. Additionally, Dremio, a data lakehouse platform originally developed by the creators of Apache Drill and initially incorporating it as its core query engine, now uses its own query engine to create virtual datasets, supporting federated querying and acceleration across cloud and on-premises storage. Programmatic access to Drill is available in multiple languages. In Python, the PyDrill library provides a native driver for executing SQL queries over REST connections, while JayDeBeApi enables JDBC-based interactions for broader database compatibility. For Java and Scala applications, Drill's JDBC driver offers direct integration, allowing embedding of queries within Spark jobs or standalone programs for scalable data processing. Recent enhancements in Apache Drill 1.22.0 include expanded storage plugin capabilities, such as additional plugin support and improved XML parsing, which bolster integrations with external systems for diverse data workflows. The REST API remains a key interface for programmatic querying, supporting asynchronous execution suitable for serverless environments.
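
As a hedged illustration of the Kafka integration, a topic exposed by the Kafka storage plugin can be queried like a table; the plugin name, topic, and field names below are assumptions for JSON-encoded messages.

  -- Aggregate events from a Kafka topic exposed as a table by the kafka plugin.
  SELECT t.event_type, COUNT(*) AS events
  FROM kafka.`clickstream` AS t
  GROUP BY t.event_type;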

    Documentation · Getting Started · Architecture · Tutorials · Drill-on-YARN · Install ... Using Saiku Analytics with Apache Drill · Docs · Developer Information ...