Nginx

Nginx (stylized as NGINX, pronounced "engine-x") is an open-source software application that functions primarily as an HTTP web server, reverse proxy server, content cache, load balancer, TCP/UDP proxy server, and mail proxy server (supporting IMAP, POP3, and SMTP protocols). Developed with a focus on high performance and low resource utilization, it employs an event-driven, asynchronous architecture using scalable non-blocking I/O, allowing it to handle thousands of concurrent connections efficiently on a single server; for instance, it can manage 10,000 inactive HTTP keep-alive connections using only about 2.5 MB of memory. Released under the 2-clause BSD License, Nginx is known for its flexibility, modular configuration, and ability to perform zero-downtime upgrades and reconfigurations. Originally created by Russian software engineer Igor Sysoev to address the C10K problem—the challenge of handling 10,000 concurrent connections on a single server—Nginx was first publicly released on October 4, 2004. Sysoev developed it while working at Rambler, a major Russian web portal and search engine, where it proved effective in managing high traffic loads. Over the years, Nginx has evolved into a cornerstone of modern web infrastructure, supporting features like HTTP/2 and HTTP/3, SSL/TLS termination, and, in its commercial variants, API gateway capabilities. By the end of 2019, Nginx was powering more than 475 million websites worldwide, and in May 2021, it surpassed Apache HTTP Server to become the most widely used web server globally. As of November 2025, it holds a 33.2% market share of websites with known web servers, according to W3Techs. This widespread adoption stems from its efficiency in diverse environments, including cloud-native applications, microservices, and Kubernetes deployments, where it excels in traffic management and security. In 2019, NGINX, Inc. was acquired by F5, Inc., which continues to develop NGINX Plus—a commercial edition offering advanced features like enhanced monitoring, API management, and enterprise support—while the core open-source version remains freely available and actively maintained. Today, Nginx continues to drive innovation in application delivery, with ongoing updates supporting platforms from FreeBSD and Linux to Windows and macOS.

Introduction

Definition and Purpose

Nginx is an open-source web server that functions as an HTTP server, reverse proxy, load balancer, content cache, TCP/UDP proxy, and mail proxy server. It was created by software engineer Igor Sysoev and released in 2004 specifically to address the C10K problem, which involves efficiently handling up to 10,000 concurrent connections on a single server. This design enables Nginx to manage high volumes of traffic with minimal resource consumption, making it suitable for demanding web environments. The primary purposes of Nginx include serving static content with high efficiency, acting as a reverse proxy to forward requests to backend dynamic applications, and providing load balancing to distribute traffic across multiple servers for high-traffic websites. Nginx employs an event-driven architecture that allows it to process multiple requests asynchronously without blocking, supporting scalable performance. Nginx is distributed under the 2-clause BSD license, ensuring its core remains free and open-source for broad adoption. Additionally, NGINX Plus serves as a commercial variant, offering enhanced features, enterprise support, and advanced modules while building on the open-source foundation.

Popularity and Adoption

Nginx commands a significant market share in the web server landscape, utilized by 33.2% of all websites with a known web server as of November 20, 2025, outpacing Apache's 25.0%. This positioning reflects its robust growth, with Netcraft surveys showing Nginx achieving gains in total sites throughout 2025; for example, a 22.8 million site increase in September (to 24.9% share across surveyed domains) and 4.6 million in October (to 24.96% share), while active sites gained 1.45 million in October (to 17.82% share). Note that W3Techs measures usage on active websites, while Netcraft surveys total and active sites/domains, leading to differing share estimates. The software's adoption has surged among high-traffic organizations in streaming delivery, software distribution, and mission-critical web services, where it handles millions of concurrent connections efficiently. Its popularity extends to containerized environments built on Docker and Kubernetes, with 42% of organizations running workloads on containers and 24% using Kubernetes for orchestration, often integrating Nginx as an ingress controller; similarly, it thrives on cloud platforms such as AWS Elastic Kubernetes Service and Google Cloud Run. Key drivers of Nginx's widespread use include its small resource footprint, which minimizes memory and CPU demands compared to thread-based alternatives, and its scalability for handling high volumes of concurrent connections through an event-driven model. These attributes facilitate seamless integration with modern architectures and platform engineering practices, adopted by 65% of surveyed organizations. According to the F5 2025 NGINX Annual Survey (October 2025), Nginx is increasingly used as a default front door for application and AI infrastructure, with 25% of respondents applying agentic AI. As of 2025, Nginx maintains dominance among the top 1 million websites, powering a substantial portion of high-profile domains while capturing preference in approximately 65% of new deployments, underscoring a shift toward performance-oriented solutions.

History

Early Development (2000s)

Nginx was conceived in 2002 by Igor Sysoev, a software engineer working as a systems administrator at Rambler Media, one of Russia's leading internet search engines at the time. Sysoev developed the software to overcome the performance limitations of the Apache HTTP Server, which struggled with high concurrency and traffic spikes on Rambler's platform. The primary motivation was addressing the C10K problem—the challenge of efficiently managing at least 10,000 simultaneous connections—which Apache's process-per-connection model could not handle without significant resource overhead. To solve this, Sysoev implemented an asynchronous, non-blocking I/O model that allowed a single thread to manage multiple connections efficiently, drawing on event-driven programming techniques. This approach was rigorously tested on high-load websites, including Rambler, where it demonstrated superior scalability compared to traditional web servers. The initial development focused on creating a lightweight HTTP server capable of serving static content under extreme loads, prioritizing low memory usage and high throughput. The first public release, version 0.1.0, occurred on October 4, 2004, marking Nginx's debut as an open-source project under a BSD-like license. Early adopters, particularly in Russia, quickly recognized its efficiency for static file delivery. In 2005, version 0.2.0 enhanced the software by adding full HTTP/1.1 protocol support, enabling better compliance with web standards and improved handling of persistent connections. A significant milestone came with version 0.5.0, released on December 4, 2006, which introduced basic load balancing features in the upstream module, including the ip_hash directive and parameters such as max_fails and fail_timeout, expanding Nginx's utility beyond static serving to dynamic acceleration and distribution across backend servers. These additions solidified its role in high-traffic environments, with continued testing on demanding web properties validating its reliability.

Expansion and Commercialization (2010s)

During the early 2010s, Nginx experienced rapid adoption as a high-performance web server and reverse proxy solution, with W3Techs reporting that it powered 6.8% of the top 1 million websites by Alexa rankings in 2011. This growth was bolstered by the release of version 1.0.0 on April 12, 2011, marking the first stable version of the software after years of development, and including refined HTTP capabilities for reverse proxying and load balancing. The release solidified Nginx's reliability for production environments, contributing to its appeal among developers and operators handling high-traffic applications. In July 2011, Igor Sysoev, Nginx's creator, co-founded Nginx, Inc. alongside Maxim Konovalov and Andrew Alexeev to provide commercial support, training, and enterprise-grade enhancements for the open-source project. This shift enabled dedicated resources for accelerating development and addressing the growing demand from businesses. The company launched NGINX Plus in August 2013 as its first commercial product, offering advanced features beyond the open-source version, such as enhanced load balancing, application firewall capabilities, and later integrations like JSON Web Token (JWT) authentication introduced in release R10 in 2016. Key open-source releases further drove Nginx's expansion, including version 1.9.5 in September 2015, which introduced support for HTTP/2 to improve multiplexing and performance over persistent connections, and version 1.9.11 in February 2016, which added dynamic module loading, allowing administrators to extend functionality without recompiling the server and simplifying customization and third-party integrations. The 2010s also saw significant community and ecosystem growth, exemplified by OpenResty, a distribution of Nginx that integrates the Lua scripting language for dynamic content handling and was first developed in 2009 by Yichun "agentzh" Zhang at Yahoo! China. This integration enabled powerful extensions like inline scripting for APIs and edge computing, fostering adoption by high-traffic platforms such as Cloudflare, which leveraged Nginx with Lua for its edge services in the early 2010s. By the end of the decade, Nginx powered a substantial portion of the internet's busiest sites, culminating in its acquisition by F5 Networks in May 2019 for $670 million to enhance multi-cloud application delivery.

Modern Enhancements (2020s)

In the early 2020s, following F5's acquisition of Nginx Inc. in May 2019, Nginx was integrated into F5's broader application delivery portfolio, enhancing multi-cloud capabilities for application services across hybrid environments. This integration allowed Nginx to leverage F5's infrastructure for improved scalability in modern deployments, while maintaining its open-source roots. Key releases in the decade advanced Nginx's protocol support and scripting features. The 1.25.0 mainline version, released on May 23, 2023, introduced experimental support for HTTP/3 via the QUIC transport protocol, enabling faster and more reliable web connections over UDP. Building on this, the 1.27.0 version, released on May 29, 2024, included enhancements to HTTP/3 handling, such as improved processing of QUIC sessions and bug fixes for stability. Most recently, the 1.29.3 mainline version, released on October 28, 2025, incorporated njs 0.9.4, which added HTTP forward proxy support for the ngx.fetch() API in both the HTTP and stream modules, alongside memory consumption optimizations to reduce resource usage in scripting scenarios. Nginx adapted to containerized and cloud-native environments with enhanced support for orchestration platforms. In 2023, the NGINX Gateway Fabric project emerged as an open-source implementation of the Kubernetes Gateway API, using Nginx as the data plane to manage ingress traffic more flexibly than traditional Ingress controllers, supporting hybrid and multi-cloud clusters. This development aligned with a growing emphasis on edge computing, where Nginx serves as a lightweight gateway for low-latency processing at distributed network edges, handling tasks like rate limiting and JWT validation in dynamic infrastructures. Security and performance received ongoing attention through regular patches addressing vulnerabilities. For instance, version 1.27.4, released on February 5, 2025, fixed an issue in TLSv1.3 virtual server handling that could allow unauthorized session resumption across configurations (CVE-2025-23419), bolstering protection against certificate bypass attacks. These updates, combined with routine refinements, ensured Nginx's robustness in high-traffic, threat-prone settings.

Architecture

Core Components

Nginx operates through a multi-process architecture designed for efficiency and reliability. At its core is a single master process that serves as the parent supervisor. This process reads and evaluates the configuration files upon startup, binds to the specified listening sockets, and spawns the necessary worker processes. It also monitors the workers, handles signals for operations such as reloading the configuration or graceful shutdowns, and facilitates restarts without interrupting service. The worker processes are the primary handlers of client requests, with their number tunable via the worker_processes directive in the configuration, often set to match the number of CPU cores for optimal performance. Each worker process operates independently, using an event-driven model to manage multiple connections concurrently without blocking, thereby enabling high concurrency. These processes perform the actual work of processing incoming requests, such as serving static files or proxying to upstream servers. If proxy caching is enabled, additional dedicated processes support cache management. The cache loader process activates once at startup to scan the disk cache and populate the in-memory metadata in the shared memory zone, ensuring quick access to cached content. Complementing this, the cache manager process runs periodically to evict expired or least-recently-used items from the cache, maintaining its size within configured limits and preventing disk overflow. Nginx employs a modular architecture, where the core binary provides foundational functionality such as process management and event handling, while loadable modules extend capabilities for specific protocols. Module categories include those for HTTP processing, mail proxying, and stream (TCP/UDP) handling, integrated at compile time or loaded dynamically. Unlike some servers, Nginx lacks built-in scripting support in its core, but its architecture allows extensibility through third-party modules, such as those in OpenResty, which add features like Lua scripting without altering the core binary.
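A brief sketch of the process-related directives described above (values are illustrative, not prescriptive):

    worker_processes auto;     # master process spawns one worker per CPU core
    pid /var/run/nginx.pid;    # file in which the master process records its PID

    events {
        worker_connections 1024;   # concurrent connections handled by each worker
    }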

Event-Driven Model

Nginx employs an asynchronous, event-driven architecture that enables efficient handling of concurrent connections without blocking operations. This model relies on non-blocking I/O operations, where the server does not wait for slow network events but instead registers them and proceeds to other tasks. Worker processes, which are single-threaded, utilize operating system mechanisms such as epoll on Linux, kqueue on BSD systems, or select as a fallback for I/O event notification. These mechanisms allow a single worker to monitor multiple file descriptors simultaneously for readiness events like incoming data or connection closures, facilitating the management of thousands of connections per worker. At the core of this design is the event loop within each worker process, which continuously polls for events using the aforementioned methods and dispatches them to appropriate handlers. Incoming requests are processed in a series of sequential phases, such as post-read (for initial header processing), pre-access (for preliminary checks like rate limiting), access (for access control and authentication), and post-access, among others. Handlers in these phases can suspend processing by returning specific codes (e.g., NGX_AGAIN for asynchronous continuation), allowing the event loop to resume later without blocking the worker. This phased approach ensures that resource-intensive or delayed operations, like disk I/O or upstream communication, do not halt progress on other connections. The master process oversees worker creation and reloading but does not directly participate in request handling. This design contrasts sharply with traditional thread-per-request models, such as Apache's prefork and worker MPMs, where each connection spawns a new process or thread, leading to high overhead from context switching and memory allocation. Instead, Nginx reuses existing connections and workers, enabling it to meet the C10K challenge—supporting 10,000 or more simultaneous connections—efficiently on multi-core systems by distributing load across multiple workers. On modern hardware, this allows for hundreds of thousands of concurrent connections with minimal resource consumption. Data sharing between workers, including for tasks like load balancing, is facilitated through shared memory zones, which use a slab allocator and mutexes to store data such as session states or cache metadata accessible by all workers.
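The event-notification method is normally selected automatically, but it can be pinned and tuned in the events block; a minimal sketch, with illustrative values:

    events {
        use epoll;                # explicit event-notification method on Linux
        worker_connections 4096;  # connections each worker may monitor concurrently
        multi_accept on;          # accept all pending connections on each wake-up
    }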

Features

Web Server and HTTP Proxy Capabilities

Nginx functions as a high-performance web server capable of efficiently serving static content such as HTML files, images, and other assets directly from the filesystem. It utilizes the root and alias directives in the ngx_http_core_module to map request URIs to file paths, enabling direct delivery of files with optimizations like the sendfile directive, which leverages the operating system's sendfile() system call for low-overhead transfers. This approach minimizes CPU usage and supports non-blocking operation for concurrent handling of multiple requests. To enhance delivery efficiency, Nginx integrates compression and dynamic content features for static files. The ngx_http_gzip_module enables on-the-fly gzip compression of responses, configurable via the gzip on; directive, which typically reduces data size by 50% or more for compressible content like text and HTML, applied based on MIME types such as text/html and minimum response lengths. Additionally, the ngx_http_ssi_module processes Server-Side Includes (SSI) in responses when enabled with ssi on;, supporting commands like include, echo, and conditional if statements to embed dynamic elements into static pages. For directory browsing, the ngx_http_autoindex_module generates formatted listings (e.g., HTML, JSON) of directory contents via autoindex on;, including file sizes and modification times, when no index file is present. As an HTTP proxy, Nginx excels in reverse proxy configurations, forwarding client requests to upstream servers such as Node.js applications or PHP-FPM backends using the proxy_pass directive in the ngx_http_proxy_module. This setup allows Nginx to act as a front-end gateway, handling incoming traffic while delegating dynamic processing to specialized servers, with customizable headers via proxy_set_header to preserve information like the original Host. Forward proxy functionality, which routes outbound requests on behalf of clients, is supported through the njs JavaScript module, with HTTP forward proxy enhancements added for the ngx.fetch() API in njs version 0.9.4, released on October 28, 2025. Nginx supports a range of HTTP protocols to ensure compatibility and performance. It handles HTTP/1.1 for client connections by default, while the proxy_http_version directive can be set to 1.1 in proxy contexts to enable features like keepalive connections to upstreams. HTTP/2 support was introduced in version 1.9.5 in 2015 through the ngx_http_v2_module, allowing multiplexing and header compression on a per-server basis. HTTP/3, based on QUIC over UDP, became available in version 1.25.0 released in 2023, providing improved latency and reliability for modern applications. For secure communications, Nginx performs TLS/SSL termination using the ngx_http_ssl_module, which is not built by default and supports directives like ssl_certificate for certificate management, while Server Name Indication (SNI) enables hosting multiple SSL virtual hosts on a single IP address by selecting certificates based on the requested domain. Nginx provides essential controls for managing HTTP traffic, including rate limiting, access restrictions, and detailed logging. The ngx_http_limit_req_module implements rate limiting per key (e.g., client IP address), configurable with zones like limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;, to prevent abuse by delaying or rejecting excess requests. Access controls are enforced via the ngx_http_access_module's allow and deny directives, which filter requests by IP address or CIDR range in sequential order, such as denying specific hosts while permitting networks like 192.168.1.0/24. For monitoring, the ngx_http_log_module records HTTP requests in customizable formats using access_log, supporting variables like $remote_addr and $status, with options for buffering, compression, and conditional logging to track traffic patterns and errors. These features leverage Nginx's event-driven architecture for non-blocking operation, ensuring responsiveness under high load.
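A condensed configuration sketch illustrating several of these directives together (the domain, filesystem paths, and backend address are illustrative placeholders):

    http {
        limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

        server {
            listen 80;
            server_name example.com;

            location /static/ {
                root /var/www/html;      # serve files straight from disk
                sendfile on;             # kernel-level file transfer
                gzip on;                 # compress text-based responses
                autoindex on;            # directory listing when no index file exists
            }

            location /app/ {
                limit_req zone=perip burst=5;        # throttle excess requests per client IP
                allow 192.168.1.0/24;                # permit the local network
                deny all;                            # reject everyone else
                proxy_set_header Host $host;         # preserve the original Host header
                proxy_http_version 1.1;              # enable keepalive to the upstream
                proxy_pass http://127.0.0.1:8080;    # forward to a backend application
            }
        }
    }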

Load Balancing and Caching

Nginx employs load balancing to distribute incoming HTTP requests across multiple backend servers, enhancing scalability and reliability. The default method is round-robin, which sequentially directs requests to each server in an upstream group, taking into account server weights for proportional distribution. Other methods include least connections, which routes requests to the server with the fewest active connections to optimize resource utilization; IP hash, which hashes the client's IP address to ensure consistent routing for session affinity; and generic hash, which uses a configurable key such as a URI or header for deterministic distribution. These methods support failover by automatically excluding unhealthy servers from the rotation. Health checks in open-source Nginx are passive: servers are marked unavailable if they exceed a threshold of failed attempts within a specified timeout, enabling seamless failover to healthy alternatives, while NGINX Plus adds active health checks that send periodic probes. The upstream module facilitates this by defining server groups via the upstream directive, where individual servers can be assigned weights to influence load distribution—higher weights direct more traffic to more capable servers. Parameters like max_fails set the number of failed attempts before a server is deemed down (default: 1), and fail_timeout defines the duration of unavailability following those failures (default: 10 seconds), allowing for dynamic adjustment as conditions change. In NGINX Plus, advanced session persistence options such as sticky cookies, routes, and learning from request headers further refine load balancing by maintaining user sessions on the same server. For performance optimization, Nginx implements proxy caching to store HTTP responses from upstream servers, reducing latency and backend load. Cached responses are stored as files in a designated directory on disk, while keys and metadata are held in a shared memory zone for rapid lookups, with automatic eviction of least recently used items when size limits are reached. Cache validation ensures responses remain fresh using directives like proxy_cache_valid to set expiration times based on status codes, while mechanisms such as proxy_cache_min_uses require multiple hits before caching to avoid storing transient content. Cache purging allows selective invalidation of stored items via the HTTP PURGE method, restricted to authorized clients through access controls, preventing unauthorized cache manipulation. Additionally, stale-while-revalidate behavior supports serving slightly outdated content while asynchronously fetching updates, balancing freshness with availability during backend delays. In NGINX Plus, the REST API enables programmatic cache management, including purging and configuration adjustments, with enhancements in recent releases improving session persistence integration for more robust traffic handling.
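A sketch of a weighted upstream group combined with a basic proxy cache (the server addresses, zone name, and paths are illustrative placeholders):

    http {
        proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

        upstream backend {
            least_conn;                                   # route to the least-busy server
            server 10.0.0.1:8080 weight=3;                # receives roughly 3x the traffic
            server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
        }

        server {
            listen 80;
            location / {
                proxy_cache app_cache;
                proxy_cache_valid 200 302 10m;            # cache successful responses for 10 minutes
                proxy_cache_min_uses 2;                   # only cache after a second request
                proxy_pass http://backend;
            }
        }
    }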

Mail and Stream Proxy Features

Nginx offers robust mail proxying capabilities for the IMAP, POP3, and SMTP protocols, enabling it to act as an intermediary between clients and backend mail servers. This functionality requires compilation with the --with-mail and --with-mail_ssl_module options and is configured within a top-level mail context. Key features include support for SASL authentication mechanisms such as LOGIN, PLAIN, and CRAM-MD5, typically integrated with an external HTTP authentication server that returns the appropriate backend details. Additionally, STARTTLS is supported to secure connections, activated via the starttls on; directive in server blocks. Proxying to backend servers occurs through proxy_pass directives, with load distribution options based on client addresses or other rules, and error messages from backend servers can be passed to clients if configured. For SMTP specifically, the proxy_smtp_auth directive enables AUTH command proxying, while XCLIENT extensions allow passing client parameters to backends for logging or policy decisions. The stream module extends Nginx's proxying to non-HTTP traffic; it was introduced in version 1.9.0 to handle TCP streams, with UDP support added in version 1.9.13 and compatibility for UNIX-domain sockets. This module enables load balancing and proxying for arbitrary protocols, such as databases like MySQL, LDAP directories, media protocols including RTMP, and services like DNS. Configuration uses stream blocks with server directives, where proxy_pass routes traffic to upstream groups supporting methods like round-robin (default), least connections, or hash-based distribution. Notable features include SSL/TLS termination or passthrough via the proxy_ssl directive for encrypted connections, access controls through allow and deny rules to restrict clients by IP address, and integrated logging for monitoring stream activity. Timeouts for connections and operations are tunable with directives like proxy_connect_timeout and proxy_timeout, ensuring reliable handling in diverse environments such as microservices architectures or legacy protocol gateways. Compared to the HTTP module, the mail and stream proxies are more streamlined, emphasizing efficient forwarding and basic load balancing over advanced content manipulation or full protocol server emulation. Health checks for upstreams are available (passive checks in open source, active TCP/UDP probes in NGINX Plus), but the stream module lacks HTTP-specific optimizations like caching or URL rewriting, making it ideal for raw socket-level traffic.
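A minimal stream-proxy sketch for UDP DNS load balancing (the listen port and upstream addresses are placeholders):

    stream {
        upstream dns_backend {
            server 10.0.0.53:53;
            server 10.0.0.54:53;
        }

        server {
            listen 53 udp;             # accept UDP datagrams
            proxy_pass dns_backend;    # forward to the upstream group (round-robin by default)
            proxy_timeout 5s;          # close idle sessions after 5 seconds
            proxy_responses 1;         # expect one response datagram per request
        }
    }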

Configuration and Modules

Configuration Basics

Nginx's configuration is managed through a hierarchical text-based file structure, primarily centered on the main nginx.conf file, which defines the server's behavior across various contexts. The configuration employs a block-based syntax where directives are organized into nested contexts such as main (global settings), events (connection handling), http (HTTP-specific configurations), server (virtual server blocks), and location (URI-specific rules). This structure allows for modular organization, where the include directive can incorporate additional files, such as MIME type definitions via include mime.types;, to enhance maintainability and separate concerns like site-specific settings in directories such as sites-enabled. Essential directives form the foundation of this setup. In the main context, worker_processes specifies the number of worker processes, often set to auto to match the number of CPU cores for optimal performance. The events block includes worker_connections, which limits the maximum simultaneous connections per worker process, typically set to 1024 or higher depending on system resources. Within the http context, global HTTP settings are defined, such as including MIME types for proper content serving. Server blocks use listen to bind to ports (e.g., listen 80;) and server_name to match domain names (e.g., server_name example.com;). Location blocks handle request routing, with root directing to the document directory (e.g., root /var/www/html;) or proxy_pass forwarding to upstream servers (e.g., proxy_pass http://backend;). Configuration changes are applied without downtime using a signal-based reload mechanism. The nginx -t command validates syntax before reloading, ensuring no errors in the files. To reload, administrators send the HUP signal via nginx -s reload or kill -HUP <pid>, where the process ID (PID) is stored in a file like /var/run/nginx.pid for easy access and management. This approach parses the new configuration while preserving active connections. Best practices emphasize modular organization to simplify administration and reduce errors. For instance, enabling sites through symlinks in sites-enabled while storing actual configs in sites-available allows easy activation or deactivation without editing core files. Error handling involves monitoring logs such as access.log and error.log for debugging, while the PID file ensures reliable process control during restarts or upgrades. These practices promote maintainability and reliability in production environments.
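A compact sketch showing how these contexts nest in practice (the domain, paths, and backend address are illustrative placeholders):

    worker_processes auto;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;

        upstream backend {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                root /var/www/html;          # static document root
                index index.html;
            }

            location /app/ {
                proxy_pass http://backend;   # delegate to the upstream group
            }
        }
    }

After editing, running nginx -t validates the syntax and nginx -s reload applies the changes without dropping active connections.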

Modules and Extensibility

Nginx extends its core functionality through a modular architecture, where modules handle specific tasks such as processing HTTP requests, managing mail protocols, or proxying streams. Modules are broadly categorized into core, HTTP, mail, and stream types. Core modules provide essential infrastructure, including configuration parsing (ngx_core_module), event processing, and process management. HTTP modules operate within the http context to handle web traffic, such as the ngx_http_core_module for request processing and the ngx_http_upstream_module for backend communication. Mail modules support proxying for protocols like IMAP, POP3, and SMTP, with ngx_mail_core_module managing session establishment and protocol handling. Stream modules enable TCP and UDP proxying, exemplified by ngx_stream_core_module for upstream connections and protocol-agnostic traffic handling. Modules can be integrated as static or dynamic components. Static modules are compiled directly into the Nginx binary during the build process, ensuring tight integration but requiring recompilation for additions or changes. Dynamic modules, introduced in version 1.9.11, allow runtime loading without rebuilding the core binary, facilitating easier distribution and updates for third-party extensions. Dynamic modules are loaded using the load_module directive in the main context, specifying the path to the shared object file, such as load_module modules/ngx_http_geoip_module.so;. This directive must appear before other configuration blocks and is processed at startup or reload. For third-party modules, compilation involves the ./configure script with the --add-dynamic-module=/path/to/module option to generate the .so file, followed by placement in the modules directory and loading via the directive. Unlike Apache's DSO system, Nginx lacks a general-purpose plugin loader for arbitrary code; extensions require C-based development and compilation. Prominent extensions include the ngx_lua module for embedding Lua scripting, enabling dynamic request processing and integration with external services; it is commonly bundled in OpenResty, a distribution that packages Nginx with LuaJIT support. The GeoIP module (ngx_http_geoip_module), which uses MaxMind's legacy GeoIP databases discontinued in 2019 with no further updates, adds geolocation variables based on client IP addresses, allowing conditional routing or logging; for current geolocation needs, third-party modules supporting MaxMind GeoIP2 databases, such as ngx_http_geoip2_module, are recommended. The headers-more module extends header manipulation beyond core capabilities, permitting addition, setting, or clearing of arbitrary request and response headers. In NGINX Plus, commercial modules like App Protect provide web application firewall functionality, integrating threat detection and mitigation as a dynamic loadable component.
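A hedged sketch of compiling and loading a third-party dynamic module (the module name and paths are illustrative; the build step runs outside nginx.conf):

    # Built beforehand, for example:
    #   ./configure --add-dynamic-module=/path/to/ngx_http_geoip2_module && make modules
    # which produces objs/ngx_http_geoip2_module.so for placement in the modules directory.

    load_module modules/ngx_http_geoip2_module.so;   # must precede the events/http blocks

    events {
        worker_connections 1024;
    }

    http {
        # directives provided by the loaded module become available in this configuration
    }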

Comparisons

With Apache

Nginx and the Apache HTTP Server represent two foundational web servers with distinct architectural approaches that influence their suitability for various workloads. Nginx employs an event-driven, asynchronous model in which each worker operates single-threaded, utilizing non-blocking I/O to handle multiple connections efficiently without spawning new processes or threads for each request. In contrast, Apache traditionally relies on a process- or thread-per-request model, configurable via Multi-Processing Modules (MPMs) such as prefork, which creates a new process for each connection; worker, which uses threads within processes; or event, which optimizes for concurrent connections but still incurs higher overhead than Nginx's approach. This difference allows Nginx to scale better under high concurrency, managing thousands of simultaneous connections with minimal memory overhead, while Apache's model provides robustness for environments requiring per-request isolation but can lead to increased memory usage during traffic spikes. Performance benchmarks highlight Nginx's advantages in serving static content and handling high-concurrency scenarios, where it often outperforms Apache by approximately two times in throughput for static files under 512 concurrent connections. For instance, in tests with doubled request loads, Nginx achieves up to 2.4 times the speed of Apache for static assets due to its efficient event loop that avoids context-switching overhead. Apache, however, maintains an edge in dynamic content processing when integrated with modules like mod_php, as its threaded model facilitates seamless execution of scripts without the need for external processes, making it more straightforward for traditional dynamic applications. Overall, Nginx's architecture results in lower CPU and memory footprints for static-heavy or proxy-based workloads, with studies showing it sustaining up to 120,000 requests per second on modern hardware compared to Apache's 70,000 under similar event MPM configurations. Configuration paradigms further differentiate the two servers, with Nginx favoring a declarative, centralized approach using server and location blocks in its nginx.conf file to define routing and behaviors, which promotes consistency but limits per-directory overrides. Apache, on the other hand, supports flexible .htaccess files for directory-level configurations, enabling runtime changes without server restarts and simplifying shared hosting environments where users need independent tweaks. While Nginx's model enhances security by centralizing rules and reducing exposure to user-modifiable files, it often requires server reloads for configuration updates, adding a small administrative step in dynamic setups. In terms of use cases, Nginx is particularly favored for reverse proxying, load balancing, and high-traffic static content delivery, where its lightweight footprint and concurrency handling excel in modern architectures like microservices or content delivery networks. Apache remains prevalent in shared hosting scenarios, leveraging its extensive module ecosystem and .htaccess support for easy customization across multiple virtual hosts on a single server. By November 2025, Nginx has surpassed Apache in overall market share, powering 33.3% of known websites compared to Apache's 25.3%, reflecting a shift toward its adoption in new deployments for scalable, performance-oriented applications.

With Other Web Servers

Nginx and Lighttpd are both lightweight, event-driven web servers designed for high-performance environments with low resource consumption. Nginx distinguishes itself through richer proxying and load balancing features, enabling it to handle complex setups and distribute traffic across multiple backends efficiently, which suits dynamic, high-traffic applications. In contrast, Lighttpd prioritizes simplicity and minimalism, making it ideal for embedded systems, resource-constrained hardware, and serving static content where advanced proxying is unnecessary. Compared to Caddy, Nginx offers superior customization for enterprise scaling, including extensive module support for fine-tuned configurations and robust non-HTTP proxy capabilities like stream and mail proxying. Caddy, however, streamlines deployment for beginners with its automatic HTTPS via built-in certificate management and native HTTP/3 support, reducing setup complexity for modern web applications, though it lags in modular extensibility for large-scale environments. Nginx functions primarily as an on-premises web server and reverse proxy, providing control over local infrastructure and seamless integration with external CDNs, whereas managed services like Cloudflare deliver built-in global distribution, caching, and DDoS protection across a distributed network without requiring on-site infrastructure. While Nginx can serve as an origin server behind Cloudflare for enhanced performance, it lacks the inherent global routing and automatic scaling of CDN platforms, necessitating additional tools for comparable worldwide reach. Nginx's flexibility facilitates hybrid deployments that combine cloud and on-premises setups, allowing flexible combinations with other tools for diverse workloads. In 2025, it is particularly favored as an ingress controller in Kubernetes environments over traditional servers due to its event-driven efficiency and compatibility with container orchestration for managing external traffic to cluster services.

NGINX Unit

NGINX Unit is an open-source dynamic application server released in 2017 by NGINX, Inc., designed as a lightweight runtime for hosting polyglot applications in a single binary without dependencies on the core NGINX web server. It natively supports execution of application code across multiple programming languages, including Python, PHP, Perl, Ruby, Go, JavaScript (Node.js), Java, and WebAssembly, enabling developers to run diverse microservices or monolithic apps in one instance. A core strength of NGINX Unit lies in its key features, such as JSON-based dynamic configuration that allows runtime updates to applications, routes, and listeners without requiring server restarts or reloads. The control API, accessible via UNIX sockets or TCP endpoints, facilitates programmatic management of these changes, while each application runs in isolated processes to enhance security and stability. Process isolation includes options for UID/GID mapping, Linux namespaces, cgroups, and filesystem restrictions, preventing interference between apps. The architecture of NGINX Unit centers on a supervisor (controller) process that oversees configuration and spawns the router and worker components. The non-privileged router process manages incoming client connections asynchronously using epoll or kqueue for high concurrency, forwarding requests to on-demand worker processes that execute the application code. This design ensures low memory usage—handling 10,000 inactive HTTP keep-alive connections requires only a few MB—and supports features like SSL/TLS termination with SNI, session caching, and built-in statistics via the control API. NGINX Unit excels in use cases involving containerized deployments and microservices architectures, where its ability to dynamically reconfigure polyglot apps reduces operational complexity compared to traditional stacks requiring separate runtimes per language. For instance, it streamlines workflows by enabling seamless scaling and updates in environments like Docker or Kubernetes. The latest version, 1.34.2, released on February 26, 2025, provides maintenance fixes for Java WebSocket handling, building on version 1.34.0's additions of initial OpenTelemetry tracing support and JSON-formatted access logging for better observability. As of October 2025, NGINX Unit is archived and unmaintained, with no further updates or security fixes planned.

NGINX Plus

NGINX Plus is the commercial edition of NGINX, launched in 2013 as a subscription-based offering by NGINX Inc. and now provided by F5 following its 2019 acquisition. It extends the open-source NGINX core with proprietary enterprise tools for enhanced application delivery, including advanced load balancing, security, and monitoring capabilities tailored for large-scale deployments. Key exclusive features in NGINX Plus include advanced load balancing with active health checks for automatic failure detection and zone synchronization to share state across clustered instances, enabling resilient high-availability setups. It incorporates F5 WAF for NGINX, a lightweight web application firewall previously known as NGINX App Protect, to defend against OWASP Top 10 threats and other Layer 7 attacks. The platform also provides an integrated analytics dashboard for visualizing real-time metrics and native support for JSON Web Token (JWT) authentication alongside OAuth 2.0 and OpenID Connect protocols to secure API interactions. For integration and automation, NGINX Plus exposes a REST API that allows dynamic reconfiguration of upstream groups, key-value stores, and SSL certificates without reloading the configuration, streamlining DevOps workflows. Live activity monitoring via the built-in dashboard delivers granular insights into traffic, errors, and resource utilization in real time. It supports hybrid cloud architectures, integrating with platforms like AWS, Azure, and Google Cloud for consistent performance across distributed environments. In 2025, NGINX Plus Release 35 (R35), based on NGINX 1.29.0, enhanced HTTP/3 support with improved QUIC handling for faster, more reliable connections and expanded API gateway capabilities. Pricing is offered in scalable subscription tiers, with options for standard and premium support levels to match organizational needs, starting from per-instance licensing that scales with throughput and features.
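A hedged sketch of how several NGINX Plus-only directives fit together (addresses and names are placeholders; these directives are not available in open-source NGINX):

    http {
        upstream backend {
            zone backend 64k;                   # shared memory zone required for the API and health checks
            server 10.0.0.1:8080;
            server 10.0.0.2:8080;
            sticky cookie srv_id expires=1h;    # session persistence via an inserted cookie
        }

        server {
            listen 80;

            location / {
                proxy_pass http://backend;
                health_check;                   # active health checks against the upstream group
            }

            location /api/ {
                api write=on;                   # REST API for dynamic reconfiguration
                allow 127.0.0.1;                # restrict API access to localhost
                deny all;
            }
        }
    }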
