OpenResty
OpenResty is a dynamic web platform that bundles an enhanced version of the Nginx core with LuaJIT, a just-in-time compiler for the Lua programming language, along with numerous Lua libraries, high-quality third-party Nginx modules, and their external dependencies.[1][2] This integration transforms the Nginx web server into a powerful application server capable of handling scalable web applications, dynamic gateways, and high-concurrency services through Lua scripting.[1][2] Created in 2007 by Yichun Zhang (known as agentzh), OpenResty originated as an open-source project and has evolved into a widely adopted solution, powering over 85 million live websites and nearly 40 million active domains, ranking as the fourth most used web server globally with a 14.31% domain market share (as of August 2025).[3][2][4]
Unlike a traditional fork of Nginx, OpenResty incorporates the official Nginx codebase while applying custom patches—many of which are upstreamed to the Nginx project—and continuously integrates the latest features and bug fixes from both Nginx and LuaJIT.[1] Key components include the standard Nginx core for event-driven I/O, an enhanced LuaJIT for efficient scripting, and modules for non-blocking interactions with databases such as MySQL and Redis, enabling a single server to handle anywhere from 10,000 to over 1,000,000 concurrent connections.[1][5] It supports use cases ranging from dynamic web portals and API gateways to web application firewalls, and is deployed in large-scale environments at companies such as Alibaba and Cloudflare, where it serves billions of requests daily on minimal hardware.[3][1]
OpenResty is licensed under the 2-clause BSD license and maintained through an active open-source community on GitHub, with over 13,000 stars and contributions from hundreds of developers.[2][3] Commercial support and enterprise tools, such as OpenResty Edge for edge computing and OpenResty XRay for performance profiling, are provided by OpenResty Inc., founded by Zhang to address production-scale web challenges.[3] The platform's design emphasizes performance and extensibility, allowing developers to embed custom Lua code directly into Nginx configurations for tasks like request routing, caching, and security without sacrificing speed.[1][2]
Overview
Definition and Purpose
OpenResty is a full-fledged web platform that integrates an enhanced version of the Nginx core with LuaJIT, carefully selected Lua libraries, and numerous third-party Nginx modules, along with their external dependencies. Rather than forking Nginx, OpenResty assembles these components into a cohesive bundle, contributing its patches back to the upstream Nginx project to maintain compatibility.[1][2]
The primary purpose of OpenResty is to enable developers to construct scalable web applications, robust web services, and dynamic web gateways capable of handling high-concurrency workloads, such as API backends and real-time data processing systems. By embedding Lua scripting directly into the Nginx event-driven architecture, it supports non-blocking I/O operations for efficient resource utilization in demanding environments.[1][6]
Created by Yichun "agentzh" Zhang, OpenResty is written primarily in C, with the bundle licensed under a 2-clause BSD open-source license. The latest stable release, version 1.27.1.2, was made available on April 3, 2025.[7][2][8]
Key Components
OpenResty is built upon several core components that integrate seamlessly to enable dynamic web application development. At its foundation is an enhanced version of the standard Nginx core, which provides the scalable event-driven architecture while incorporating patches for Lua integration, allowing Lua scripts to interact directly with Nginx's processing pipeline.[5] This core is extended by LuaJIT, a just-in-time compiler for Lua 5.1, which delivers high-performance execution of Lua code within the Nginx environment, ensuring efficient handling of complex scripting tasks without compromising the server's responsiveness.[5] Complementing these are select Lua libraries, such as lua-resty-core, a pure Lua library that reimplements parts of the Lua Nginx module's API using LuaJIT's Foreign Function Interface (FFI) for optimized access to Nginx features, promoting interoperability between Lua scripts and core server operations.[9]
Among the bundled Nginx modules, the lua-nginx-module stands out as the primary enabler, embedding the Lua interpreter into Nginx to allow scripting at various request phases, thereby facilitating dynamic content generation and real-time decision-making across components.[5] Additional modules like ngx_srcache provide caching capabilities that leverage shared resources for faster response times, while ngx_drizzle and ngx_postgres offer database connectivity to MySQL-compatible and PostgreSQL backends, respectively, allowing Lua scripts to query databases non-blockingly and integrate data flows directly into the web serving process.[5] These modules interoperate through Lua's extensible nature, enabling developers to chain operations such as caching database results or injecting dynamic logic into static Nginx configurations.
Most components in OpenResty are enabled by default upon installation, providing a ready-to-use platform, though users can disable or customize them during compilation using specific configure flags, such as --without-http_lua_module to exclude Lua support.[5] For optional modules like ngx_drizzle and ngx_postgres, explicit enablement is required via flags like --with-http_drizzle_module.[5] This configurability ensures flexibility while maintaining the bundled ecosystem's cohesion.
A key interoperability feature is the support for shared memory zones, which allow data sharing across all Nginx worker processes in a server instance, using directives like lua_shared_dict to allocate zones for storing Lua tables or other structures, thus enabling efficient inter-worker communication for tasks like session management or global state maintenance without external dependencies.[10]
History
Origins
OpenResty originated in October 2007 at Yahoo! China, where Yichun Zhang (agentzh) developed it as a Perl-based RESTful web framework designed to handle dynamic requests for an Open API platform.[11] This initial version, distributed via CPAN, provided a general-purpose web service platform supporting features like database interfaces, caching, and CAPTCHA generation to facilitate RESTful services in web applications.[12] The framework addressed the need for flexible, dynamic handling of API requests in production environments, overcoming the constraints of purely static web server setups.[13]
In September 2009, following Zhang's move to Taobao (part of Alibaba Group), the project was redeveloped as ngx_openresty to better suit high-traffic e-commerce demands.[11] This iteration shifted from Perl to an Nginx core integrated with LuaJIT, enabling superior performance for processing massive dynamic workloads, such as those supporting millions of daily user interactions on Taobao's platform.[14] The LuaJIT embedding was selected for its just-in-time compilation capabilities, which enhanced scripting speed in real-time web serving.[7]
Early development and sponsorship came from Yahoo! China and Taobao until 2011, during which ngx_openresty evolved as an internal tool for scalable web services.[7] In June 2011, it transitioned to a fully open-source project under the name OpenResty, broadening its availability beyond proprietary use and emphasizing its role in extending Nginx for dynamic, production-grade REST API deployments.[7] The core motivation remained solving the rigidity of static Nginx configurations, allowing inline scripting to manage complex, high-concurrency scenarios without external dependencies.[15]
Development Milestones
In 2011, the project was renamed OpenResty from its prior moniker ngx_openresty and fully open-sourced, coinciding with the establishment of its primary GitHub repository to facilitate broader collaboration.[2][7] This transition marked a pivotal shift toward independent development following earlier sponsorship by Yahoo! China and Taobao.com, with subsequent support from Cloudflare Inc. from 2012 to 2016.[7]
By 2014, OpenResty's popularity surged, with annual GitHub downloads exceeding 550,000, reflecting its growing adoption among developers.[15] That year also saw the introduction of community-driven Lua modules, expanding the platform's extensibility through user-contributed extensions.[15]
In late 2015, the inaugural OpenResty Con conference was held in Beijing, China, fostering direct engagement among users, contributors, and creators to discuss advancements and applications.[15]
The year 2017 brought organizational maturation with the formation of OpenResty Inc., providing commercial support and enterprise solutions while sustaining the open-source project.[15] This entity, founded by Yichun Zhang, enabled dedicated resources for maintenance and innovation.[16]
In 2020, OpenResty released its 10-year community report, underscoring significant ecosystem growth, including over 5,000 pull requests across 69 GitHub repositories and widespread integration in production environments.[15]
Major releases progressed steadily from the 1.7.x series in 2014, which introduced enhanced module bundling, to the 1.27.x series by 2025, incorporating upstream Nginx integrations and security patches.[17][18] For instance, version 1.27.1.2, released in 2025, addressed vulnerabilities and upgraded core components such as LuaJIT and OpenSSL.[19]
Ongoing maintenance is led by Yichun Zhang, with contributions from a global community via GitHub, ensuring regular updates for compatibility and performance.[7][15]
Architecture
Core Integration
OpenResty centers its architecture around an enhanced version of the Nginx core, which serves as the event-driven foundation for handling HTTP requests and responses. Nginx employs a master-worker process model, where the master process manages worker processes responsible for processing client connections, ensuring efficient resource utilization and scalability without inter-worker locking. OpenResty extends this model by introducing a privileged agent process that operates with master-level permissions to perform high-privilege tasks, such as binary hot upgrades, thereby enhancing deployment flexibility.[6][20]
Lua scripts integrate seamlessly into the Nginx pipeline through configuration directives specified in nginx.conf, allowing developers to inject custom logic at precise points in the request lifecycle. Key directives, such as content_by_lua_block, rewrite_by_lua_block, and access_by_lua_block, enable Lua code execution during specific phases, including rewrite (for URL manipulation), access (for authentication and authorization), and content (for generating response bodies). These integration points allow Lua code to interact directly with Nginx's internal structures, transforming static configurations into dynamic behaviors while maintaining the server's core efficiency.[10][21]
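The phase hooks described above can be sketched in a minimal nginx.conf fragment; the URIs and the X-API-Token header check below are purely illustrative:

```nginx
location /demo {
    # rewrite phase: redirect a legacy URI before routing decisions
    rewrite_by_lua_block {
        if ngx.var.uri == "/demo/old" then
            return ngx.redirect("/demo/new", 301)
        end
    }

    # access phase: reject the request before content generation
    access_by_lua_block {
        if ngx.var.http_x_api_token ~= "secret" then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }

    # content phase: produce the response body
    content_by_lua_block {
        ngx.say("phase pipeline complete")
    }
}
```

Each block runs in its designated phase of the same request, so the content handler only executes if the access-phase check passes.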
The non-blocking I/O model in OpenResty builds upon Nginx's asynchronous event-driven architecture, utilizing system-level mechanisms like epoll on Linux or kqueue on BSD systems to manage connections without thread-blocking operations. This enables efficient handling of concurrent requests and asynchronous interactions with upstream services, such as databases or caches, through Lua coroutines that yield control back to the event loop during I/O waits. Lua scripts executed via the embedded LuaJIT interpreter further support this model by providing non-blocking APIs for subrequests and sockets, ensuring the runtime environment remains responsive under high load.[6][21]
Memory management in OpenResty facilitates data sharing across worker processes via zone-based shared dictionaries, declared using the lua_shared_dict directive in the HTTP context of nginx.conf. These dictionaries allocate fixed-size shared memory zones (e.g., lua_shared_dict dogs 10m) that persist data like caches or session states accessible from any worker, avoiding the need for external storage while minimizing synchronization overhead. Access occurs through the ngx.shared API in Lua, enabling atomic operations for thread-safe updates in a multi-worker setup.[10]
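As a small sketch of the ngx.shared API (the zone name my_cache and the /count location are examples), a shared dictionary can maintain a counter visible to every worker process:

```nginx
http {
    # 10 MB shared memory zone, visible to all worker processes
    lua_shared_dict my_cache 10m;

    server {
        listen 8080;

        location /count {
            content_by_lua_block {
                local dict = ngx.shared.my_cache
                -- incr is atomic across workers; the third argument
                -- initializes the key to 0 if it does not yet exist
                local n, err = dict:incr("hits", 1, 0)
                if not n then
                    ngx.log(ngx.ERR, "incr failed: ", err)
                    return ngx.exit(500)
                end
                ngx.say("hits so far: ", n)
            }
        }
    }
}
```

Because the zone lives in shared memory rather than per-worker Lua state, the counter increases monotonically no matter which worker handles a given request.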
During compilation from source, OpenResty bundles its core components—including Nginx, LuaJIT, and various modules—using the ./configure script with options like --with-luajit (enabled by default), --add-module=PATH for third-party Nginx modules, and --with-pcre-jit for performance optimizations. The process involves extracting the source tarball, running ./configure [options], followed by make and make install, resulting in a cohesive binary that integrates all elements without requiring separate installations.[22]
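The build steps above amount to a short transcript; the version number matches the release named in this article, and the flags shown are examples rather than requirements:

```sh
# Illustrative source build of OpenResty
tar -xzf openresty-1.27.1.2.tar.gz
cd openresty-1.27.1.2
./configure --with-pcre-jit --with-http_ssl_module -j4
make -j4
sudo make install   # installs under /usr/local/openresty by default
```

The -j4 option parallelizes compilation of the bundled components, and the resulting tree contains Nginx, LuaJIT, and the bundled modules as a single installation.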
LuaJIT Embedding
LuaJIT serves as the primary Lua interpreter embedded within OpenResty, functioning as a just-in-time (JIT) compiler for Lua 5.1 that delivers performance approaching native C code through advanced optimizations like trace compilation and a high-speed assembler.[23] This enhancement enables efficient execution of Lua scripts in high-throughput server environments, maintaining full API and ABI compatibility with standard Lua 5.1 while supporting multiple architectures including x86, ARM, and MIPS.[23]
The embedding of LuaJIT into Nginx occurs primarily through the lua-nginx-module, a core component of OpenResty that integrates the interpreter directly into the Nginx core.[10] This module allows Lua code to execute across various Nginx processing phases, such as initialization (init_by_lua_block), rewriting (rewrite_by_lua_block), access control (access_by_lua_block), content generation (content_by_lua_block), header filtering (header_filter_by_lua_block), body filtering (body_filter_by_lua_block), and logging (log_by_lua_block).[10] By leveraging Lua coroutines as lightweight threads, the module synchronizes Lua execution with Nginx's event-driven model, enabling non-blocking operations like cosockets for network I/O without disrupting the server's asynchronous architecture.[21]
Lua scripts can be loaded either inline within Nginx configuration files using block directives like content_by_lua_block { ... }, which embed the code directly, or from external files via file-based directives such as content_by_lua_file /path/to/script.lua.[10] The lua_package_path directive further extends this by specifying directories for loading Lua modules, mimicking Lua's standard module search path and allowing reusable code organization across the server's Lua environment.[10] These mechanisms ensure flexibility in deployment, with external loading particularly useful for maintaining large scripts outside configuration files.
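A minimal sketch of file-based loading, assuming scripts are kept under a hypothetical /etc/openresty/lua/ directory:

```nginx
http {
    # make modules under /etc/openresty/lua/ resolvable by require();
    # the trailing ";;" appends the default search path
    lua_package_path "/etc/openresty/lua/?.lua;;";

    server {
        listen 8080;

        location /report {
            # load the content handler from an external file
            # instead of embedding it inline
            content_by_lua_file /etc/openresty/lua/report.lua;
        }
    }
}
```

With lua_code_cache enabled (the default), the external file is compiled once and cached, so file-based loading carries no per-request parsing cost.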
A key feature of this embedding is LuaJIT's Foreign Function Interface (FFI), which permits Lua scripts to directly invoke C functions and access C data structures without wrappers or extensions. In the context of OpenResty, the FFI enables seamless integration with Nginx's C APIs and third-party libraries, such as performing low-level socket operations or cryptographic computations, thereby extending Lua's capabilities for performance-critical tasks while avoiding the overhead of traditional Lua C modules.[10]
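As a small illustration of the FFI mechanism (calling into libc here rather than Nginx internals; the /strlen location is hypothetical):

```nginx
location /strlen {
    content_by_lua_block {
        local ffi = require "ffi"
        -- declare the C function signature once
        ffi.cdef[[
            size_t strlen(const char *s);
        ]]
        local s = ngx.var.arg_s or ""
        -- call straight into libc with no C glue module;
        -- tonumber converts the returned size_t cdata
        ngx.say(tonumber(ffi.C.strlen(s)))
    }
}
```

LuaJIT compiles such FFI calls down to direct native calls on traced paths, which is why this approach avoids the overhead of classic Lua/C binding layers.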
To suit server environments with long-running processes, OpenResty's LuaJIT integration includes tuning options for garbage collection to reduce memory pauses and fragmentation. The lua_malloc_trim directive, for instance, automatically releases cached memory back to the operating system every specified number of requests (default: 1000), preventing gradual memory bloat in worker processes.[24] Additionally, the lua_thread_cache_max_entries directive manages a cache of Lua thread objects per worker (default: 1024), recycling coroutines to minimize allocation overhead and GC pressure during frequent request handling.[25] These configurations help maintain stable performance under load by aligning Lua's memory management with Nginx's pooled allocation strategy.
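Both tuning directives are set in the http context; the values below are examples, not recommendations:

```nginx
http {
    # release cached heap memory back to the OS every 500 requests
    # (default is every 1000)
    lua_malloc_trim 500;

    # cap the per-worker pool of recycled Lua thread (coroutine)
    # objects (default is 1024)
    lua_thread_cache_max_entries 2048;
}
```

Lowering lua_malloc_trim trades a little CPU for a smaller resident footprint, while a larger thread cache reduces allocation churn under bursty request loads.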
Features
Dynamic Scripting
OpenResty enables dynamic scripting through the integration of Lua code into Nginx configurations, allowing developers to inject custom logic directly into various processing phases of HTTP requests. This capability is provided by the ngx_lua module, which embeds LuaJIT and exposes Nginx's internal APIs to Lua scripts for non-blocking operations.[10]
Lua scripts are embedded using specific directives in the nginx.conf file, such as access_by_lua, which executes code during the access phase to implement authentication or authorization checks, like verifying user credentials before allowing request processing. Similarly, content_by_lua runs in the content phase to generate dynamic responses, such as assembling JSON data from backend services on the fly. These directives support both inline Lua blocks and file-based inclusion for modular script management.[26][27]
To handle common tasks like data storage and retrieval, OpenResty leverages Lua libraries such as lua-resty-redis, a non-blocking Redis client that facilitates caching operations by connecting to Redis servers via cosockets and supporting commands like SET and GET for storing session data or API responses. For database interactions, lua-resty-mysql provides a non-blocking MySQL driver, enabling Lua scripts to perform queries, such as selecting user records, while maintaining Nginx's event-driven model without blocking worker processes.[28][29]
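A minimal lua-resty-redis sketch, assuming a Redis server on 127.0.0.1:6379 and an illustrative greeting key:

```nginx
location /cached_greeting {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1 s connect/send/read timeout

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "redis connect failed: ", err)
            return ngx.exit(500)
        end

        local val, err = red:get("greeting")
        if not val then
            ngx.log(ngx.ERR, "redis get failed: ", err)
            return ngx.exit(500)
        end
        if val == ngx.null then
            -- cache miss: compute the value and store it for 60 s
            val = "hello from the backend"
            red:set("greeting", val)
            red:expire("greeting", 60)
        end

        -- return the connection to the cosocket keepalive pool
        red:set_keepalive(10000, 100)
        ngx.say(val)
    }
}
```

Note the ngx.null sentinel for missing keys and the set_keepalive call, which pools the connection for reuse instead of closing it after each request.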
Dynamic module loading in OpenResty scripts is achieved using Lua's require() function within ngx_lua contexts, allowing libraries to be loaded on-demand during request processing; for instance, a script can require a utility module only when conditional logic dictates its use, optimizing memory footprint.[30]
Error handling in these scripts employs Lua's pcall() to catch runtime exceptions safely, preventing crashes and enabling custom recovery logic that integrates with Nginx's error_page directives to serve fallback responses, such as redirecting to an error handler on database connection failures.[31]
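The pcall pattern can be sketched as follows; the JSON-decoding handler and the arg_payload parameter are illustrative stand-ins for any logic that may raise a Lua error:

```nginx
location /safe {
    content_by_lua_block {
        -- run the risky handler under pcall so a Lua error
        -- becomes a logged 500 instead of an unhandled crash
        local ok, err = pcall(function()
            local cjson = require "cjson"
            local data = cjson.decode(ngx.var.arg_payload or "")
            ngx.say("items: ", #data)
        end)
        if not ok then
            ngx.log(ngx.ERR, "handler failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
    }
}
```

The status returned by ngx.exit can then be routed through a matching error_page directive to serve a custom fallback response.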
For security, OpenResty sandboxes Lua execution in independent global environments per request, isolating scripts to mitigate risks from untrusted code and limiting access to sensitive system resources unless explicitly granted via APIs.[10]
Performance
OpenResty achieves high-throughput, low-latency performance by integrating Lua scripting into Nginx's core event-driven model, which handles thousands to over a million concurrent connections per server through non-blocking I/O. The event loop processes events without blocking on individual requests, allowing workers to manage extensive workloads while remaining responsive. By running Lua coroutines over cosockets and subrequests, OpenResty keeps script execution aligned with Nginx's asynchronous paradigm, minimizing latency in dynamic web applications.[21]
A key optimization is the srcache-nginx-module, which implements transparent caching for responses from upstream servers or static files, thereby reducing redundant backend invocations and accelerating content delivery. This module stores cached data in backends like Memcached via the memc-nginx-module, supporting configurable cache keys, TTLs, and eviction policies to balance memory usage and hit rates. In practice, srcache can cut upstream traffic by caching frequently requested resources, leading to substantial reductions in response times for read-heavy workloads.[32]
Connection pooling further enhances efficiency in upstream interactions, particularly through libraries like lua-resty-http, which reuse TCP connections across requests to avoid the overhead of repeated handshakes. This cosocket-based client generates pool names based on host, port, and SSL settings, enabling persistent connections that improve throughput for HTTP proxying or API calls. Combined with shared memory mechanisms, such as the lua_shared_dict directive, OpenResty facilitates lock-free data sharing across workers, reducing synchronization costs and boosting CPU utilization for inter-process communication.[33][34]
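A minimal lua-resty-http sketch showing connection reuse; the library is a third-party module (installable via opm), and the upstream address is an example:

```nginx
location /proxy {
    content_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()
        httpc:set_timeout(2000)  -- 2 s

        local res, err = httpc:request_uri("http://127.0.0.1:8081/data", {
            method = "GET",
            -- keep the TCP connection in the cosocket pool for reuse
            keepalive_timeout = 60000,
            keepalive_pool = 10,
        })
        if not res then
            ngx.log(ngx.ERR, "upstream request failed: ", err)
            return ngx.exit(502)
        end

        ngx.status = res.status
        ngx.say(res.body)
    }
}
```

Because the pool is keyed by host, port, and SSL settings, repeated requests to the same upstream skip the TCP (and TLS) handshake entirely.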
LuaJIT's just-in-time compilation, customized in OpenResty's branch with aggressive optimization flags like maxtrace=8000, delivers near-native execution speeds for Lua scripts, outperforming standard Lua interpreters by orders of magnitude in compute-intensive tasks. For monitoring these enhancements, OpenResty provides profiling via tools like OpenResty XRay, which analyzes bottlenecks in real-time, alongside community libraries for metrics collection to inform ongoing optimizations.[35][36][37]
Usage
Configuration Fundamentals
OpenResty utilizes a configuration system inherited from NGINX, with the primary configuration file named nginx.conf. This file organizes directives into hierarchical blocks such as http, server, and location, allowing administrators to define global settings, virtual servers, and specific URI handling rules. For instance, the http block encompasses server-wide configurations, while server blocks specify listening ports and hostnames, and location blocks match request URIs to apply targeted directives.[38]
Installation of OpenResty can be achieved through pre-built packages for various Linux distributions, such as using apt on Ubuntu or Debian and yum or dnf on CentOS, RHEL, or Fedora, which simplifies deployment without source compilation. Alternatively, users can download and compile from source; the Lua NGINX module is bundled and enabled by default in official OpenResty releases, so no extra configure flags are needed to include it. Once installed, OpenResty is started via the openresty command-line tool, which functions like NGINX's nginx binary, typically by specifying the configuration file path, as in openresty -c /path/to/nginx.conf.[39][22][38]
To enable Lua scripting within the configuration, directives like content_by_lua_block are placed inside location blocks to embed inline Lua code for dynamic content generation or processing. The lua_code_cache on; directive, enabled by default, optimizes performance by caching compiled Lua bytecode, though it can be set to off during development for immediate code changes. Lua blocks, such as those using content_by_lua_block, integrate seamlessly into the NGINX processing phases without requiring additional module loading in standard OpenResty installations.[10][38]
Logging in OpenResty combines NGINX's built-in facilities with Lua-specific APIs for comprehensive error tracking and debugging. The error_log directive in nginx.conf specifies the log file path and verbosity level, such as error_log logs/error.log warn;, directing NGINX errors to a designated file. Within Lua scripts, the ngx.log function allows custom logging at various levels (e.g., ngx.log(ngx.ERR, "An error occurred");), which outputs to the same error log configured via the error_log directive.[38][40][10]
Configuration changes in OpenResty support graceful reloading to apply updates without interrupting active connections or causing downtime. This is accomplished using the command openresty -s reload (or equivalently nginx -s reload), which validates the new nginx.conf and spawns worker processes to adopt it seamlessly.[38]
Application Examples
OpenResty enables the development of dynamic web applications by embedding Lua scripting directly into Nginx configurations, allowing for efficient handling of HTTP requests and responses. One basic application is a simple HTTP responder that generates custom content, such as JSON data, without relying on external backends.[38]
For instance, the following configuration uses the content_by_lua_block directive to return a JSON object:
http {
    server {
        listen 8080;

        location /api/hello {
            content_by_lua_block {
                local cjson = require "cjson"
                ngx.header.content_type = "application/json"
                ngx.say(cjson.encode({message = "Hello, World!", status = "success"}))
            }
        }
    }
}
This setup leverages the built-in lua-cjson library to encode Lua tables into JSON format, ensuring a lightweight response for API endpoints.[38]
Database integration is another common use case, where OpenResty queries external databases during the request processing phases to enforce security or retrieve data. The lua-resty-mysql library provides a non-blocking interface for MySQL interactions, suitable for authentication in the access phase. For example, the configuration below checks user credentials against a MySQL database before allowing access:
http {
    lua_shared_dict auth_cache 1m;

    server {
        listen 8080;

        location /protected {
            access_by_lua_block {
                local mysql = require "resty.mysql"
                local db, err = mysql:new()
                if not db then
                    ngx.log(ngx.ERR, "failed to instantiate mysql: ", err)
                    return ngx.exit(500)
                end
                db:set_timeout(1000) -- 1 sec

                local ok, err = db:connect{
                    host = "127.0.0.1",
                    port = 3306,
                    database = "auth_db",
                    user = "user",
                    password = "pass"
                }
                if not ok then
                    ngx.log(ngx.ERR, "failed to connect: ", err)
                    return ngx.exit(500)
                end

                local username = ngx.var.arg_user
                local password = ngx.var.arg_pass
                if not username or not password then
                    db:close()
                    return ngx.exit(401)
                end

                -- quote the inputs to prevent SQL injection
                local quoted_username = ngx.quote_sql_str(username)
                local quoted_password = ngx.quote_sql_str(password)
                local res, err = db:query("SELECT id FROM users WHERE username = "
                    .. quoted_username .. " AND password = " .. quoted_password, 100)
                if not res then
                    ngx.log(ngx.ERR, "bad result: ", err)
                    db:close()
                    return ngx.exit(403)
                elseif #res == 0 then
                    db:close()
                    return ngx.exit(401)
                end

                -- Auth successful
                db:close()
            }

            # Proxy to upstream if auth passes
            proxy_pass http://backend;
        }
    }
}
This example establishes a connection, executes a query for validation, and denies access if no matching user is found, all without blocking the event loop.[29]
OpenResty also supports caching mechanisms to optimize performance for both static and dynamic content by integrating with upstream services. The srcache-nginx-module enables transparent subrequest-based caching, often combined with memcached for distributed storage. A practical caching proxy configuration might look like this:
http {
    upstream my_memcached {
        server 127.0.0.1:11211;
    }

    server {
        listen 8080;

        location /cached {
            set $key $uri$args;
            set_md5 $md5key $key;
            srcache_fetch GET /memc $md5key;
            srcache_store PUT /memc $md5key;
            srcache_methods GET HEAD;
            add_header X-Srcache-Fetch-Status $srcache_fetch_status;

            # Fall back to upstream for cache misses
            proxy_pass http://dynamic_upstream;
        }

        location /memc {
            internal;
            set $memc_key $query_string;
            set $memc_exptime 300;
            memc_cmds_allowed get set;
            memc_connect_timeout 100ms;
            memc_pass my_memcached;
        }
    }
}
Here, requests are first checked against the cache; if missed, the upstream is invoked, and the response is stored for subsequent requests within the 300-second expiration window.[41]
In API gateway scenarios, OpenResty implements rate limiting to protect backends from abuse using the lua-resty-limit-traffic library. This module controls request rates based on keys like client IP, rejecting or delaying excess traffic. An example configuration for limiting to 200 requests per second with a burst of 100 is:
http {
    lua_shared_dict my_limit_req_store 100m;

    server {
        listen 8080;

        location /api {
            access_by_lua_block {
                local limit_req = require "resty.limit.req"
                local lim, err = limit_req.new("my_limit_req_store", 200, 100)
                if not lim then
                    ngx.log(ngx.ERR, "failed to instantiate limit req object: ", err)
                    return ngx.exit(500)
                end

                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    ngx.sleep(delay)
                end
            }

            proxy_pass http://api_backend;
        }
    }
}
This enforces the limit during the access phase, allowing bursts while throttling sustained high traffic to maintain system stability.[42]
In production environments, OpenResty powers web application firewalls (WAFs) by embedding custom Lua logic for threat detection and mitigation at the edge. For example, Cloudflare utilized OpenResty-based Lua scripting in its WAF to compile and execute rules efficiently for global traffic protection.[1][43] It also serves as a backend for mobile applications, handling high-concurrency web services and dynamic gateways that support millions of users, as seen in platforms requiring scalable API orchestration.[1]
Ecosystem
The OpenResty community plays a vital role in extending the platform's capabilities through open-source contributions, including Lua modules, documentation, and tools that enhance its ecosystem. Community members develop and maintain packages that integrate seamlessly with OpenResty, fostering innovation in areas like SSL certificate automation and performance optimization. This collaborative effort has grown the platform's adoption, with contributions managed through structured channels that encourage participation from developers worldwide.[44]
A key aspect of community involvement is the OpenResty Package Manager (OPM), which serves as the official repository for community-contributed Lua modules and libraries. OPM functions similarly to package managers like CPAN or npm, allowing users to install and manage third-party packages via a command-line utility. For instance, the lua-resty-auto-ssl module, which automates Let's Encrypt SSL certificate issuance and renewal within OpenResty, is distributed through OPM and has been downloaded extensively for enabling secure HTTPS configurations on the fly.[45][46]
Contributions occur primarily through GitHub repositories, where developers submit pull requests and report issues for core components like the lua-nginx-module. The OpenResty organization maintains over 60 active repositories, with more than 1,100 pull requests across the top 10 repositories alone, reflecting robust community engagement in code reviews and enhancements. Mailing lists also facilitate discussions: the English-language list handles technical queries and announcements, while the Chinese-language list serves a large portion of the global user base.[47][15][44]
Community events and educational resources further strengthen participation. OpenResty Con, the project's conference, was first held in Beijing in late 2015, followed by a second edition in Shenzhen in 2016, where developers shared advancements in Lua integration and Nginx extensions. Video tutorials on the official OpenResty YouTube channel cover topics from basic setup to advanced debugging, aiding newcomers and experienced users alike. Online forums, such as Stack Overflow's openresty tag with its 465 questions, provide ongoing support for troubleshooting and best practices.[15][48][49][50]
OpenResty powers nearly 40 million active domains and over 85 million websites globally, representing about 6.89% of all sites as of mid-2025, underscoring the community's impact on scalable web infrastructure. The project is loosely organized, emphasizing self-contained, independently developed components so that contributors can focus on modular improvements without centralized oversight. This approach has sustained a vibrant ecosystem, with guidelines encouraging high-quality, compatible submissions to repositories and OPM.[4][51]
OpenResty Edge is a distributed traffic management platform developed by OpenResty Inc., designed for setting up dynamic load balancers and reverse proxy clusters to handle high-traffic web applications.[52] Released in 2023, it extends the core OpenResty capabilities with features like global server load balancing (GSLB) and automated failover, enabling users to build private content delivery networks (CDNs) and edge computing solutions without relying on proprietary hardware. This platform supports deployment across various Linux distributions, including Ubuntu, Debian, and CentOS, and integrates Lua scripting for custom traffic routing logic.[53]
OpenResty XRay serves as a dynamic tracing and monitoring tool for performance profiling, memory analysis, and real-time troubleshooting in OpenResty-based applications.[54] It employs noninvasive techniques to capture application traces, identify bottlenecks, and optimize resource usage, such as detecting memory leaks or slow Lua code execution paths.[55] Available in on-premises and cloud editions, XRay includes standard analyzers for web stacks and supports integration with OpenResty's LuaJIT environment to provide insights without code modifications.[56]
The lua-resty-* family of libraries comprises official and endorsed Lua modules bundled with OpenResty or distributed via its package manager (OPM), enhancing functionality in areas like core API access and security.[9] For instance, lua-resty-core provides FFI-based interfaces to Nginx internals for efficient SSL session management and variable manipulation.[57] Similarly, lua-resty-openidc implements OpenID Connect relying party and OAuth 2.0 resource server protocols, enabling secure authentication flows in OpenResty deployments.[58]
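As an illustration of the latter, lua-resty-openidc's documented authenticate() call can gate a protected location during the access phase; the discovery URL, client credentials, and upstream name below are placeholders:

```nginx
location /protected {
    access_by_lua_block {
        local opts = {
            -- placeholder identity provider and client credentials
            discovery = "https://accounts.example.com/.well-known/openid-configuration",
            client_id = "my-client-id",
            client_secret = "my-client-secret",
            redirect_uri = "/protected/redirect_uri",
        }
        -- authenticate the request, redirecting to the provider if needed
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
            ngx.status = 500
            ngx.say(err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        -- pass the authenticated subject to the upstream application
        ngx.req.set_header("X-User", res.id_token.sub)
    }
    proxy_pass http://app_backend;
}
```

Unauthenticated visitors are redirected to the identity provider, while authenticated requests reach the upstream with the user's identity attached as a header.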
Among derivative projects, Apache APISIX stands out as a Kubernetes-native API gateway built directly on OpenResty, leveraging its Nginx core and LuaJIT for high-performance traffic management.[59] APISIX extends OpenResty with dynamic routing, plugin-based extensibility, and service mesh integration, making it suitable for microservices architectures while inheriting OpenResty's scalability for handling billions of requests daily.[60]
Commercial support for OpenResty and its extensions is provided through OpenResty Inc., offering enterprise-grade services for deployment, optimization, and maintenance in production environments.[3] This includes tailored solutions for troubleshooting via XRay, scaling with Edge, and integrating lua-resty libraries, serving global customers who process massive traffic volumes.[1]