Nginx
Nginx (stylized as NGINX, pronounced "engine-x") is an open-source software application that functions primarily as an HTTP web server, reverse proxy server, content cache, load balancer, TCP/UDP proxy server, and mail proxy server (supporting IMAP, POP3, and SMTP protocols).[1] Developed with a focus on high performance and low resource utilization, it employs an event-driven, asynchronous architecture using scalable non-blocking I/O, allowing it to handle thousands of concurrent connections efficiently on a single server; for instance, it can manage 10,000 inactive HTTP keep-alive connections using only about 2.5 MB of memory.[1] Released under the 2-clause BSD License, Nginx is known for its flexibility, modular configuration, and ability to perform zero-downtime upgrades and reconfigurations.[1]

Originally created by Russian software engineer Igor Sysoev to address the C10K problem—the challenge of handling 10,000 concurrent connections on a single web server—Nginx was first publicly released on October 4, 2004.[2] Sysoev developed it while working at Rambler Media, a Russian search engine, where it proved effective in managing high traffic loads.[2] Over the years, Nginx has evolved into a cornerstone of modern web infrastructure, supporting features like HTTP/2, HTTP/3, SSL/TLS termination, and API gateway capabilities in its commercial variants.[1]

By the end of 2019, Nginx was powering more than 475 million websites worldwide, and in May 2021, it surpassed Apache HTTP Server to become the most widely used web server globally.[3] As of November 2025, it holds a 33.2% market share of websites with known web servers, according to W3Techs.[4] This widespread adoption stems from its efficiency in diverse environments, including cloud-native applications, microservices, and Kubernetes deployments, where it excels in traffic management and security.[5]

In 2019, Nginx, Inc., the company behind the project, was acquired by F5, Inc., leading to the development of NGINX Plus—a commercial
edition offering advanced features like enhanced monitoring, API management, and enterprise support—while the core open-source version remains freely available and actively maintained.[6] Today, Nginx continues to drive innovation in application delivery, with ongoing updates supporting platforms from FreeBSD and Linux to Windows and macOS.[1]
Introduction
Definition and Purpose
Nginx is open-source software that functions as an HTTP web server, reverse proxy server, load balancer, content cache, TCP/UDP proxy server, and mail proxy server.[7] It was created by Russian software engineer Igor Sysoev in 2004 specifically to address the C10k problem, which involves efficiently handling up to 10,000 concurrent connections on a single server.[2] This design enables Nginx to manage high volumes of traffic with minimal resource consumption, making it suitable for demanding web environments. The primary purposes of Nginx include serving static web content with high efficiency, acting as a reverse proxy to forward requests to backend dynamic applications, and providing load balancing to distribute traffic across multiple servers for high-traffic websites.[8] Nginx employs an event-driven architecture that allows it to process multiple requests asynchronously without blocking, supporting scalable performance.[9] Nginx is distributed under the 2-clause BSD license, ensuring its core remains free and open-source for broad adoption.[10] Additionally, NGINX Plus serves as a commercial variant, offering enhanced features, enterprise support, and advanced modules while building on the open-source foundation.[11]
Popularity and Adoption
Nginx commands a significant market share in the web server landscape, utilized by 33.2% of all websites with a known web server as of November 20, 2025, outpacing Apache's 25.0%.[12] This positioning reflects its robust growth, with Netcraft surveys showing Nginx achieving gains in total sites throughout 2025; for example, a 22.8 million site increase in September (to 24.9% share across surveyed domains) and 4.6 million in October (to 24.96% share), while active sites gained 1.45 million in October (to 17.82% share). Note that W3Techs measures usage on active websites, while Netcraft surveys total and active sites/domains, leading to differing share estimates.[13][14]

The software's adoption has surged among high-traffic organizations, including Netflix for streaming delivery, Autodesk for software distribution, and NASA for mission-critical web services, where it handles millions of concurrent connections efficiently.[15] Its popularity extends to containerized environments like Docker and Kubernetes, with 42% of organizations running workloads on containers and 24% using Kubernetes for orchestration, often integrating Nginx as an ingress controller; similarly, it thrives on cloud platforms such as AWS Elastic Kubernetes Service and Google Cloud Run.[16][17][18]

Key drivers of Nginx's widespread use include its lightweight resource footprint, which minimizes memory and CPU demands compared to thread-based alternatives, and its scalability for handling high volumes of concurrent connections through an event-driven model. These attributes facilitate seamless integration with modern microservices architectures and platform engineering practices, adopted by 65% of surveyed organizations in 2025.
According to the F5 2025 NGINX Annual Survey (October 2025), Nginx is increasingly used in AI infrastructure as a default front door, with 25% of respondents applying agentic AI for configuration.[16] As of 2025, Nginx maintains dominance among the top 1 million websites, powering a substantial portion of high-profile domains while capturing preference in approximately 65% of new web server deployments, underscoring a shift toward performance-oriented solutions.[19][20]
History
Early Development (2000s)
Nginx was conceived in 2002 by Igor Sysoev, a Russian software engineer working as a systems administrator at Rambler Media, one of Russia's leading internet search engines at the time. Sysoev developed the software to overcome the performance limitations of Apache, which struggled with high concurrency and traffic spikes on Rambler's platform.[21][22] The primary motivation was addressing the C10k problem—the challenge of efficiently managing at least 10,000 simultaneous connections—which Apache's process-per-connection model could not handle without significant resource overhead.[2] To solve this, Sysoev implemented an asynchronous, non-blocking I/O model that allowed a single thread to manage multiple connections efficiently, drawing on event-driven programming techniques. This approach was rigorously tested on high-load Russian websites, including Rambler, where it demonstrated superior scalability compared to traditional servers.[21][23] The initial development focused on creating a lightweight HTTP server capable of serving static content under extreme loads, prioritizing low memory usage and high throughput.[2]

The first public release, version 0.1.0, occurred on October 4, 2004, marking Nginx's debut as an open-source project under a BSD-like license. Early adopters in Russia quickly recognized its efficiency for static file delivery. In 2005, version 0.2.0 enhanced the software by adding full HTTP/1.1 protocol support, enabling better compliance with web standards and improved handling of persistent connections.[24][2]

A significant milestone came with version 0.5.0, released on December 4, 2006, which introduced basic load balancing features in the upstream module, including the ip_hash directive and server parameters such as max_fails and fail_timeout, expanding Nginx's utility beyond static serving to dynamic content acceleration and traffic distribution across backend servers.
These additions solidified its role in high-traffic environments, with continued testing on demanding Russian internet properties validating its reliability.[24][22][25]
Expansion and Commercialization (2010s)
During the early 2010s, Nginx experienced rapid adoption as a high-performance web server and proxy solution, with W3Techs reporting that it powered 6.8% of the top 1 million websites by Alexa rankings in April 2011.[26] This growth was bolstered by the release of version 1.0.0 on April 12, 2011, marking the first stable version of the software after years of development, and including refined HTTP proxy capabilities for reverse proxying and load balancing.[27] The stable release solidified Nginx's reliability for production environments, contributing to its appeal among developers and operators handling high-traffic applications.

In July 2011, Igor Sysoev, Nginx's creator, co-founded Nginx, Inc. alongside Maxim Konovalov and Andrew Alexeev to provide commercial support, training, and enterprise-grade enhancements for the open-source project.[3] This shift enabled dedicated resources for accelerating development and addressing the growing demand from businesses. The company launched NGINX Plus in August 2013 as its first commercial product, offering advanced features beyond the open-source version, such as enhanced load balancing, application firewall capabilities, and later integrations like JSON Web Token (JWT) authentication introduced in release R10 in 2016.[2][28]

Key open-source releases further drove Nginx's expansion, including version 1.9.5 in September 2015, which introduced stable support for HTTP/2 to improve multiplexing and performance over persistent connections.[29] In April 2017, version 1.13.0 added dynamic module loading, allowing administrators to extend functionality at runtime without recompiling the server, which simplified customization and third-party integrations.[30]

The 2010s also saw significant community and ecosystem growth, exemplified by OpenResty, a distribution of Nginx that integrates the Lua scripting language for dynamic content handling and was first developed in 2009 by Yichun "agentzh" Zhang at Yahoo!
China.[31] This integration enabled powerful extensions like inline scripting for APIs and edge computing, fostering adoption by high-traffic platforms such as Cloudflare, which leveraged Nginx with Lua for its content delivery network in the early 2010s.[32] By the end of the decade, Nginx powered a substantial portion of the internet's busiest sites, culminating in its acquisition by F5 Networks in May 2019 for $670 million to enhance multi-cloud application delivery.[33]
Modern Enhancements (2020s)
In the early 2020s, following F5's acquisition of Nginx Inc. in May 2019, Nginx was integrated into F5's broader application delivery portfolio, enhancing multi-cloud capabilities for application services across hybrid environments.[33] This integration allowed Nginx to leverage F5's infrastructure for improved scalability in modern deployments, while maintaining its open-source roots.[6]

Key releases in the decade advanced Nginx's protocol support and scripting features. The 1.25.0 mainline version, released on May 23, 2023, introduced experimental support for HTTP/3 via the QUIC transport protocol, enabling faster and more reliable web connections over UDP.[34] Building on this, the 1.27.0 version, released on May 29, 2024, included enhancements to QUIC handling, such as improved processing of QUIC sessions and bug fixes for HTTP/3 stability.[30] Most recently, the 1.29.3 mainline version, released on October 28, 2025, incorporated njs 0.9.4, which added HTTP forward proxy support for the ngx.fetch() API in both HTTP and stream modules, alongside memory consumption optimizations to reduce resource usage in scripting scenarios.[35]

Nginx adapted to containerized and cloud-native environments with enhanced support for orchestration platforms. In 2023, the NGINX Gateway Fabric project emerged as an open-source implementation of the Kubernetes Gateway API, using Nginx as the data plane to manage ingress traffic more flexibly than traditional Ingress controllers, supporting hybrid and multi-cloud Kubernetes clusters.[36] This development aligned with a growing emphasis on edge computing, where Nginx serves as a lightweight API gateway for low-latency processing at distributed network edges, handling tasks like rate limiting and JWT validation in dynamic infrastructures.[37]

Security and performance received ongoing attention through regular patches addressing vulnerabilities.
For instance, version 1.27.4, released on February 5, 2025, fixed a critical issue in TLSv1.3 virtual server handling that could allow unauthorized session resumption across SNI configurations (CVE-2025-23419), bolstering protection against certificate bypass attacks.[30] These updates, combined with routine HTTP/3 refinements, ensured Nginx's robustness in high-traffic, threat-prone settings.[38]
Architecture
Core Components
Nginx operates through a multi-process architecture designed for efficiency and reliability. At its core is a single master process that serves as the parent supervisor. This process reads and evaluates the configuration files upon startup, binds to the specified listening sockets, and spawns the necessary worker processes. It also monitors the workers, handles signals for operations such as reloading the configuration or graceful shutdowns, and facilitates restarts without interrupting service.[39] The worker processes are the primary handlers of client requests, with their number tunable via the worker_processes directive in the configuration file, often set to match the number of CPU cores for optimal performance. Each worker process operates independently, using an event-driven model to manage multiple connections concurrently without blocking, thereby enabling high concurrency. These processes perform the actual work of processing incoming requests, such as serving static files or proxying to upstream servers.[39][40]
If proxy caching is enabled, additional dedicated processes support cache management. The cache loader process activates once at startup to scan the disk cache and populate the in-memory metadata in the shared memory zone, ensuring quick access to cached content. Complementing this, the cache manager process runs periodically to evict expired or least-recently-used items from the cache, maintaining its size within configured limits and preventing disk overflow.[41]
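The cache loader and cache manager operate on caches declared with the proxy_cache_path directive; a minimal sketch (the path, zone name, and sizes are illustrative):

```nginx
http {
    # The cache loader scans /var/cache/nginx once at startup to rebuild
    # the in-memory metadata held in the 10 MB shared zone "meta"; the
    # cache manager then periodically evicts entries to honor max_size
    # and removes items unused for longer than "inactive".
    proxy_cache_path /var/cache/nginx keys_zone=meta:10m
                     max_size=1g inactive=60m use_temp_path=off;
}
```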
Nginx employs a modular design, where the core binary provides foundational functionality such as process management and event handling, while loadable modules extend capabilities for specific protocols. Core modules include those for HTTP processing, mail proxying, and stream (TCP/UDP) handling, integrated at compile time or loaded dynamically. Unlike some servers, Nginx lacks built-in scripting support in its core, but its architecture allows extensibility through third-party modules, such as those in OpenResty, which add features like Lua scripting without altering the binary.[42][43]
Event-Driven Model
Nginx employs an asynchronous, event-driven architecture that enables efficient handling of concurrent connections without blocking operations. This model relies on non-blocking I/O operations, where the server does not wait for slow network events but instead registers them and proceeds to other tasks. Worker processes, which are single-threaded, utilize operating system mechanisms such as epoll on Linux, kqueue on BSD systems, or select as a fallback for I/O multiplexing. These mechanisms allow a single worker to monitor multiple file descriptors simultaneously for readiness events like incoming data or connection closures, facilitating the management of thousands of connections per worker.[44][45][42]

At the core of this architecture is the event loop within each worker process, which continuously polls for events using the aforementioned multiplexing methods and dispatches them to appropriate handlers. Incoming requests are processed in a series of sequential phases, such as post-read (for initial header processing), pre-access (for preliminary checks like rate limiting), access (for authorization), and post-access, among others. Handlers in these phases can suspend processing by returning specific codes (e.g., NGX_AGAIN for asynchronous continuation), allowing the event loop to resume later without blocking the worker. This phased approach ensures that resource-intensive or delayed operations, like disk I/O or upstream communication, do not halt progress on other connections. The master process oversees worker creation and configuration reloading but does not directly participate in request handling.[42][45][44]

This design contrasts sharply with traditional thread-per-request models, such as those in Apache, where each connection spawns a new thread or process, leading to high overhead from context switching and memory allocation.
Instead, Nginx reuses existing connections and workers, enabling scalability to handle the C10k problem—supporting 10,000 or more simultaneous connections—efficiently on multi-core systems by distributing load across multiple workers. On modern hardware, this allows for hundreds of thousands of concurrent connections with minimal resource consumption. Inter-process communication, including for tasks like load balancing, is facilitated through shared memory zones, which use a slab allocator and mutexes to store data such as session states or cache metadata accessible by all workers.[45][44][46]
Features
Web Server and HTTP Proxy Capabilities
Nginx functions as a high-performance web server capable of efficiently serving static content such as HTML files, images, and other assets directly from the filesystem. It utilizes the root and alias directives in the ngx_http_core_module to map request URIs to file paths, enabling direct delivery of files with optimizations like the sendfile directive, which leverages the operating system's sendfile() system call for low-overhead transfers.[47][48] This approach minimizes CPU usage and supports asynchronous I/O for concurrent handling of multiple requests.
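A minimal static-file server block along these lines (the host name and filesystem paths are hypothetical):

```nginx
server {
    listen 80;
    server_name static.example.com;   # hypothetical host

    # root appends the URI to this path: /css/site.css is
    # served from /var/www/html/css/site.css.
    root /var/www/html;

    # alias replaces the matched prefix instead of appending to it:
    # /img/logo.png is served from /data/images/logo.png.
    location /img/ {
        alias /data/images/;
    }

    # Copy file data in-kernel via sendfile(2) rather than
    # read()/write() round trips through user space.
    sendfile on;
}
```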
To enhance delivery efficiency, Nginx integrates compression and dynamic content features for static files. The ngx_http_gzip_module enables on-the-fly gzip compression of responses, configurable via the gzip on; directive, which typically reduces data size by 50% or more for compressible content like text and HTML, applied based on MIME types such as text/html and minimum response lengths.[49] Additionally, the ngx_http_ssi_module processes Server-Side Includes (SSI) in responses when enabled with ssi on;, supporting commands like include, echo, and conditional if statements to embed dynamic elements into static pages.[50] For directory browsing, the ngx_http_autoindex_module generates formatted listings (e.g., HTML, JSON) of directory contents via autoindex on;, including file sizes and modification times, when no index file is present.[51]
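The three modules can be combined in one hedged sketch (MIME types, size thresholds, and paths are illustrative choices):

```nginx
http {
    # Compress text-based responses of at least 1 kB on the fly;
    # text/html is always compressed when gzip is on.
    gzip on;
    gzip_types text/plain text/css application/json;
    gzip_min_length 1024;

    server {
        # Process SSI commands such as <!--#include virtual="..." -->
        # embedded in otherwise static pages.
        location /pages/ {
            ssi on;
        }

        # Generate a directory listing when no index file is present.
        location /downloads/ {
            autoindex on;
        }
    }
}
```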
As an HTTP proxy, Nginx excels in reverse proxy configurations, forwarding client requests to upstream servers such as Node.js applications or PHP-FPM backends using the proxy_pass directive in the ngx_http_proxy_module.[52] This setup allows Nginx to act as a front-end gateway, handling incoming traffic while delegating dynamic processing to specialized servers, with customizable headers via proxy_set_header to preserve information like the original Host. Forward proxy functionality, which routes outbound requests on behalf of clients, is supported through the njs JavaScript module, with HTTP forward proxy enhancements added for the ngx.fetch() API in njs version 0.9.4 released on October 28, 2025.[35]
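A typical reverse-proxy sketch, assuming a backend application listening on local port 3000 (the host name and port are hypothetical):

```nginx
server {
    listen 80;
    server_name app.example.com;   # hypothetical host

    location / {
        # Forward all requests to the backend application server.
        proxy_pass http://127.0.0.1:3000;

        # Preserve the original Host header and client address
        # so the upstream application sees them.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```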
Nginx supports a range of HTTP protocols to ensure compatibility and performance. In proxy contexts it speaks HTTP/1.0 to upstream servers by default; setting the proxy_http_version directive to 1.1 enables features like keepalive connections to upstreams.[53] HTTP/2 support was introduced in version 1.9.5 in 2015 through the ngx_http_v2_module, allowing multiplexing and header compression on a per-server basis.[30] Experimental HTTP/3 support, based on QUIC over UDP, was introduced in version 1.25.0, released in 2023, providing improved latency and reliability for modern web applications.[54] For secure communications, Nginx performs TLS/SSL termination using the ngx_http_ssl_module, which requires OpenSSL and supports directives like ssl_certificate for certificate management, while Server Name Indication (SNI) enables hosting multiple SSL virtual hosts on a single IP address by selecting certificates based on the requested domain.[55][56]
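A server block terminating TLS and speaking both HTTP/2 and HTTP/3 might look like the following sketch (certificate paths and the host name are hypothetical; the standalone http2 directive assumes a recent mainline version, roughly 1.25.1 or later):

```nginx
server {
    # HTTP/2 over TLS on TCP, and HTTP/3 over QUIC on UDP, both on 443.
    listen 443 ssl;
    listen 443 quic;
    http2 on;

    server_name www.example.com;   # SNI selects this certificate pair
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Advertise HTTP/3 availability to clients.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```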
Nginx provides essential controls for managing HTTP traffic, including rate limiting, access restrictions, and detailed logging. The ngx_http_limit_req_module implements leaky bucket rate limiting per key (e.g., client IP), configurable with zones like limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;, to prevent abuse by delaying or rejecting excess requests.[57] Access controls are enforced via the ngx_http_access_module's allow and deny directives, which filter requests by IP address or CIDR ranges in sequential order, such as denying specific hosts while permitting networks like 192.168.1.0/24.[58] For monitoring, the ngx_http_log_module records HTTP requests in customizable formats using access_log, supporting variables like $remote_addr and $status, with options for buffering, compression, and conditional logging to track traffic patterns and errors.[59] These features leverage Nginx's event-driven architecture for non-blocking operation, ensuring scalability under high load.[60]
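These three traffic controls can be sketched together (the zone name, network ranges, and log format are illustrative):

```nginx
http {
    # Leaky-bucket limiter: at most 1 request/second per client IP,
    # with state tracked in a 10 MB shared memory zone.
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    # Custom access-log format built from core variables.
    log_format brief '$remote_addr [$time_local] "$request" $status';

    server {
        access_log /var/log/nginx/access.log brief;

        location /api/ {
            # Permit short bursts of up to 5 requests above the rate.
            limit_req zone=one burst=5;

            # IP-based access control, evaluated in order.
            allow 192.168.1.0/24;
            deny  all;
        }
    }
}
```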
Load Balancing and Caching
Nginx employs load balancing to distribute incoming HTTP requests across multiple backend servers, enhancing scalability and reliability. The default method is round-robin, which sequentially directs requests to each server in an upstream group, taking into account server weights for proportional distribution.[61] Other methods include least connections, which routes requests to the server with the fewest active connections to optimize resource utilization; IP hash, which hashes the client's IP address to ensure consistent routing for session affinity; and generic hash, which uses a configurable key such as a URI or header for deterministic distribution.[61] These methods support failover by automatically excluding unhealthy servers from the rotation.[61] Active health checks, available in NGINX Plus, monitor upstream servers by sending periodic probes and marking servers as unavailable when they exceed a failure threshold; open-source Nginx relies on passive checks, marking a server as failed after unsuccessful attempts to communicate with it, thereby enabling failover to healthy alternatives.[62] The upstream module facilitates this by defining server groups via the upstream directive, where individual servers can be assigned weights to influence load distribution—higher weights direct more traffic to capable servers.[63] Parameters like max_fails set the number of failed attempts within a window before a server is deemed down (default: 1), and fail_timeout defines both that window and the duration of unavailability following the failures (default: 10 seconds), allowing behavior to adapt as server conditions change.[63] In NGINX Plus, advanced session persistence options such as sticky cookies, routes, and learning from request headers further refine load balancing by maintaining user sessions on the same server.[61]
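An upstream group illustrating these parameters (addresses, ports, and weights are arbitrary):

```nginx
http {
    upstream backend {
        # least_conn: pick the server with the fewest active connections.
        least_conn;

        # weight biases distribution; max_fails/fail_timeout control
        # passive failure detection and temporary removal.
        server 10.0.0.1:8080 weight=3;
        server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.3:8080 backup;   # used only when others are down
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
```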
For performance optimization, Nginx implements proxy caching to store HTTP responses from upstream servers, reducing latency and backend load. Caches can be disk-based, storing full responses as files in a designated directory, or memory-based, holding metadata in a shared zone for rapid lookups, with automatic eviction of least recently used items when size limits are reached.[41] Cache validation ensures responses remain fresh using directives like proxy_cache_valid to set expiration times based on status codes, while mechanisms such as proxy_cache_min_uses require multiple hits before caching to avoid transient content.[41]
Cache purging allows selective invalidation of stored items via the HTTP PURGE method, restricted to authorized clients through access controls, preventing unauthorized cache manipulation.[41] Additionally, stale-while-revalidate supports serving slightly outdated content while asynchronously fetching updates, balancing freshness with availability during backend delays.[41] In NGINX Plus, the REST API enables programmatic cache management, including purging and configuration adjustments, with enhancements in recent releases improving session persistence integration for more robust traffic handling.[64]
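A caching sketch combining these directives (the cache path, zone name, and backend group are hypothetical; serving stale content during refresh is approximated here with proxy_cache_use_stale and background updates):

```nginx
http {
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

    server {
        location / {
            proxy_cache app_cache;
            proxy_pass http://backend;   # hypothetical upstream group

            # Cache 200/302 responses for 10 minutes and 404s for
            # 1 minute, and only after a URI has been seen twice.
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
            proxy_cache_min_uses 2;

            # Serve stale content while a refresh runs in the background.
            proxy_cache_use_stale updating error timeout;
            proxy_cache_background_update on;
        }
    }
}
```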
Mail and Stream Proxy Features
Nginx offers robust mail proxying capabilities for the IMAP, POP3, and SMTP protocols, enabling it to act as an intermediary between clients and backend mail servers.[65] This functionality requires compilation with the --with-mail and --with-mail_ssl_module options and is configured within a top-level mail context.[65] Key features include support for SASL authentication mechanisms such as LOGIN, PLAIN, and CRAM-MD5, typically integrated with an external HTTP authentication server that returns the appropriate upstream server details.[65] Additionally, STARTTLS is supported to secure connections, activated via the starttls on; directive in server blocks.[65] Proxying to backend servers occurs through proxy_pass directives, with load distribution options based on client IP addresses or other rules, and error messages from backend authentication can be passed to clients if configured.[66] For SMTP specifically, the proxy_smtp_auth directive enables AUTH command proxying, while XCLIENT extensions allow passing client parameters to backends for logging or rate limiting.[66]
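A minimal mail-context sketch (the auth_http endpoint is a hypothetical local service; TLS certificates would additionally be required for STARTTLS in practice):

```nginx
mail {
    # External HTTP service that authenticates users and returns the
    # address of the backend mail server to proxy to.
    auth_http http://127.0.0.1:9000/auth;

    server {
        listen   143;
        protocol imap;
        starttls on;          # offer STARTTLS to IMAP clients
    }

    server {
        listen   25;
        protocol smtp;
        proxy_smtp_auth on;   # proxy the AUTH command to the backend
        xclient on;           # pass client details via XCLIENT
    }
}
```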
The stream module extends Nginx's proxying to non-HTTP traffic, introduced in version 1.9.0 to handle TCP streams, with UDP support added in version 1.9.13 and compatibility for UNIX-domain sockets.[67] This module enables load balancing and proxying for arbitrary protocols, such as databases like MySQL or LDAP, VoIP applications including RTMP, and services like DNS or syslog.[68] Configuration uses stream blocks with server directives, where proxy_pass routes traffic to upstream groups supporting methods like round-robin (default), least connections, or hash-based distribution.[68] Notable features include SSL/TLS termination or passthrough via the proxy_ssl directive for encrypted connections, access controls through allow and deny rules to restrict based on client IP, and integrated logging for monitoring stream activity.[67] Timeouts for connections and operations are tunable with directives like proxy_connect_timeout and proxy_timeout, ensuring reliable handling in diverse environments such as microservices architectures or legacy protocol gateways.[67]
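A stream-context sketch showing UDP load balancing for DNS and TCP proxying for a database, with the access controls and timeouts mentioned above (all addresses are illustrative):

```nginx
stream {
    upstream dns_servers {
        # Round-robin by default across the resolvers.
        server 10.0.0.5:53;
        server 10.0.0.6:53;
    }

    # UDP load balancing for DNS.
    server {
        listen 53 udp;
        proxy_pass dns_servers;
        proxy_timeout 5s;
    }

    # TCP proxying for a MySQL backend with IP-based access control.
    server {
        listen 3306;
        allow 192.168.1.0/24;
        deny  all;
        proxy_pass 10.0.0.10:3306;
        proxy_connect_timeout 2s;
    }
}
```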
Compared to the HTTP module, the mail and stream proxies are more streamlined, emphasizing efficient forwarding and basic load balancing over advanced content manipulation or full protocol server emulation.[68] Health checks for upstreams are available (TCP in open source, UDP in NGINX Plus), but the module lacks HTTP-specific optimizations like caching or URL rewriting, making it ideal for raw socket-level traffic.[68]
Configuration and Modules
Configuration Basics
Nginx's configuration is managed through a hierarchical text-based file structure, primarily centered on the main nginx.conf file, which defines the server's behavior across various contexts. The configuration employs a block-based syntax where directives are organized into nested contexts such as main (global settings), events (connection handling), http (HTTP-specific configurations), server (virtual server blocks), and location (URI-specific rules). This structure allows for modular organization, where the include directive can incorporate additional files, such as MIME type definitions via include mime.types;, to enhance maintainability and separate concerns like site-specific settings in directories such as sites-enabled.[69]
Essential directives form the foundation of this setup. In the main context, worker_processes specifies the number of worker processes, often set to auto to match the number of CPU cores for optimal performance. The events block includes worker_connections, which limits the maximum simultaneous connections per worker process, typically set to 1024 or higher depending on system resources. Within the http context, global HTTP settings are defined, such as including MIME types for proper content serving. Server blocks use listen to bind to ports (e.g., listen 80;) and server_name to match domain names (e.g., server_name example.com;). Location blocks handle request routing, with root directing to the document directory (e.g., root /var/www/html;) or proxy_pass forwarding to upstream servers (e.g., proxy_pass http://backend;).[69][70]
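The directives above can be assembled into a minimal nginx.conf sketch (the domain, paths, and backend upstream name are illustrative):

```nginx
# main context: global settings
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;   # pull in MIME type mappings

    server {
        listen 80;
        server_name example.com;

        # Serve static files for most URIs...
        location / {
            root /var/www/html;
        }

        # ...but forward API requests to an upstream group.
        location /api/ {
            proxy_pass http://backend;   # hypothetical upstream
        }
    }
}
```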
Configuration changes are applied without downtime using a signal-based reload mechanism. The nginx -t command validates syntax before reloading, ensuring no errors in the configuration files. To reload, administrators send the HUP signal via nginx -s reload or kill -HUP <pid>, where the process ID (PID) is stored in a file like /var/run/nginx.pid for easy access and management. This approach parses the new configuration while preserving active connections.[71][69]
Best practices emphasize separation of concerns to simplify administration and reduce errors. For instance, enabling sites through symlinks in sites-enabled while storing actual configs in sites-available allows easy activation or deactivation without editing core files. Error handling involves monitoring logs such as access.log and error.log for debugging, while the PID file ensures reliable process control during restarts or upgrades. These practices promote scalability and reliability in production environments.[69][71]
Modules and Extensibility
Nginx extends its core functionality through a modular architecture, where modules handle specific tasks such as processing HTTP requests, managing mail protocols, or proxying streams. Modules are broadly categorized into core, HTTP, mail, and stream types. Core modules provide essential infrastructure, including configuration parsing (ngx_core_module), event processing, and process management. HTTP modules operate within the http context to handle web server operations, such as the ngx_http_core_module for request routing and the ngx_http_upstream_module for backend communication. Mail modules support proxying for protocols like IMAP, POP3, and SMTP, with ngx_mail_core_module managing session establishment and authentication. Stream modules enable TCP and UDP proxying, exemplified by ngx_stream_core_module for upstream connections and protocol-agnostic traffic handling.[40][72]

Modules can be integrated as static or dynamic components. Static modules are compiled directly into the Nginx binary during the build process, ensuring tight integration but requiring recompilation for additions or changes. Dynamic modules, introduced in version 1.9.11, allow runtime loading without rebuilding the core binary, facilitating easier distribution and updates for third-party extensions.[30][73]

Dynamic modules are loaded using the load_module directive in the main configuration context, specifying the path to the shared object file, such as load_module modules/ngx_http_geoip_module.so;. This directive must appear before other configuration blocks and is processed at startup or reload. For third-party modules, compilation involves the ./configure script with the --add-dynamic-module=/path/to/module option to generate the .so file, followed by placement in the modules directory and loading via the directive. Unlike Apache's DSO system, Nginx lacks a general-purpose runtime plugin loader for arbitrary code; extensions require C-based module development and compilation.[40][42][73]
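A hedged sketch of loading a dynamic module, using the GeoIP module named above as the example (the database and log paths are illustrative, and the module must first be built against matching Nginx sources):

```nginx
# Must appear in the main context, before other configuration blocks.
load_module modules/ngx_http_geoip_module.so;

events {
    worker_connections 1024;
}

http {
    # Point the module at a (legacy) MaxMind country database.
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    server {
        listen 80;

        # Variables supplied by the loaded module, such as
        # $geoip_country_code, are now usable in the configuration.
        add_header X-Country $geoip_country_code;
    }
}
```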
Prominent extensions include the ngx_lua module for embedding Lua scripting, enabling dynamic request processing and integration with external services; it is commonly bundled in OpenResty, a distribution that patches Nginx with LuaJIT support. The GeoIP module (ngx_http_geoip_module), which uses MaxMind's legacy GeoIP databases discontinued in 2019 with no further updates, adds geolocation variables based on client IP addresses, allowing conditional routing or logging; for current geolocation needs, third-party modules supporting MaxMind GeoIP2 databases, such as ngx_http_geoip2_module, are recommended.[31][74][75][76] The headers-more module extends header manipulation beyond core capabilities, permitting addition, setting, or clearing of arbitrary request and response headers. In NGINX Plus, commercial modules like App Protect provide web application firewall functionality, integrating threat detection and mitigation as a dynamic loadable component.[77][78]