Linux Virtual Server
The Linux Virtual Server (LVS) is a free and open-source load balancing software project that integrates into the Linux kernel to enable the construction of highly scalable and highly available server clusters, allowing multiple real servers to operate as a single virtual server for distributing network traffic such as web, mail, or VoIP services.[1][2] LVS achieves this through its core component, the IP Virtual Server (IPVS) subsystem, which uses kernel-level packet processing to dispatch incoming requests to backend real servers based on configurable scheduling algorithms, while maintaining transparency to clients.[2] The project supports three primary load balancing methods: VS/NAT (Network Address Translation), VS/TUN (IP tunneling), and VS/DR (Direct Routing), each optimized for different network topologies and scalability needs, with VS/DR offering the highest performance by avoiding packet rewriting overhead.[3]
Initiated in 1998 by Wensong Zhang at the National University of Defense Technology in China, LVS was designed to address the limitations of single-server architectures in handling growing internet workloads, evolving from early prototypes into a mature kernel module adopted in mainline Linux distributions.[1] Key milestones include the addition of IPv6 support in Linux kernel 2.6.28 (2008) and, around 2012, advanced extensions developed against kernel 2.6.32 such as FULLNAT for bidirectional NAT and SYNPROXY for DDoS protection.[1] The project is licensed under the GNU General Public License (GPL) and includes user-space tools such as ipvsadm for configuring virtual services.[1]
LVS clusters emphasize high availability through mechanisms like health checks on real servers and automatic failover reconfiguration, enabling seamless scaling by adding or removing nodes without service interruption.[2] Notable for its efficiency, LVS can handle over 1 Gbps of throughput in tunneling mode and supports hundreds of backend servers, making it a foundational technology for large-scale deployments in enterprise environments, including integrations with tools like Keepalived for VRRP-based redundancy.[3] Its architecture separates the load balancer (often called LinuxDirector) from the real servers, which can be connected via LAN or WAN, ensuring reliability and performance in diverse setups.[3]
Introduction
Definition and Purpose
The Linux Virtual Server (LVS) is a free, open-source load balancing software integrated into the Linux kernel, enabling the distribution of IP traffic across multiple real servers via a shared virtual IP address.[1] This architecture allows a cluster of commodity servers to function as a single, unified virtual server, providing transparent scalability without altering client-side configurations.[4] The primary purpose of LVS is to construct highly scalable and highly available server clusters for demanding network services, including web hosting, email, and VoIP applications.[1] By efficiently routing incoming requests using methods such as Network Address Translation (NAT), IP Tunneling (TUN), and Direct Routing (DR), LVS supports environments capable of handling high volumes of requests, ensuring reliability and performance under heavy loads through the addition of real servers as needed.[5]
At its core, LVS employs a director-based load balancing model, where a front-end director processes client connections and forwards them to back-end real servers, all while maintaining session transparency and without requiring modifications to application code.[6] This approach leverages the Linux kernel's networking stack to achieve low-latency, high-throughput distribution, making it suitable for large-scale deployments.[4] LVS has been licensed under the GNU General Public License (GPL) version 2 since its inception as an open-source project initiated by Wensong Zhang in 1998.[7][1]
Key Features
Linux Virtual Server (LVS) provides robust support for the core transport protocols TCP, UDP, and SCTP, enabling efficient load balancing for a wide range of network services. Additionally, it handles multi-port protocols such as FTP through application helper modules (for example, ip_vs_ftp) that work with the Netfilter hooks in the Linux kernel, allowing seamless management of connection-oriented traffic without disrupting standard operations.[8][9] A key advantage of LVS is its high scalability, demonstrated by its ability to support millions of concurrent connections on commodity hardware with sufficient memory, making it suitable for large-scale deployments.[5] This scalability stems from its cluster-based architecture, which distributes load across multiple real servers while minimizing bottlenecks at the director node.[3]
LVS operates with complete transparency to clients and backend servers, requiring no alterations to client-side requests or server configurations; it employs a virtual IP (VIP) to present the cluster as a single entity.[2] Packet processing occurs directly at the kernel level via the IP Virtual Server (IPVS) module, ensuring low latency and reduced CPU overhead compared to user-space alternatives.[10] As an open-source solution under the GNU General Public License, LVS facilitates customization and integration with other Linux tools, such as iptables and keepalived, to enhance flexibility in diverse environments.
History and Development
Origins
The Linux Virtual Server (LVS) project was initiated by Wensong Zhang in May 1998 while he was a researcher at the National Laboratory for Parallel & Distributed Processing in Changsha, Hunan, China.[5][11] This development occurred amid the rapid expansion of the internet in the late 1990s, where the demand for scalable and highly available web services outpaced the capabilities of single servers. Zhang's work was motivated by the need to leverage inexpensive commodity hardware in clusters to provide reliable network services, drawing inspiration from ongoing research in parallel and distributed computing systems.[5] The project sought to create an open-source framework for load balancing that could support high-performance virtual servers without relying on proprietary hardware solutions.[7]
The initial implementation of LVS involved kernel-level extensions to the Linux TCP/IP stack to enable efficient layer-4 switching, complemented by user-space tools for managing load distribution.[5] The first public release in 1998 introduced prototypes that were quickly adopted in academic and research settings, including early deployments for high-traffic sites such as www.linux.com and sourceforge.net.[5] Subsequent efforts focused on refining the system for broader integration, culminating in its inclusion in the Linux kernel.[5]
Milestones and Releases
A stable version of the IP Virtual Server (IPVS) patch (1.0.8), the core load balancing component of Linux Virtual Server (LVS), was released for the Linux kernel 2.2 series on May 14, 2001, providing support for high-performance transport-layer load balancing without requiring additional user-space modules.[12] This enabled LVS to scale internet services efficiently on commodity hardware, marking a shift from earlier prototypes. The first mainline integration of IPVS into the Linux kernel occurred in version 2.4.28 in November 2004, further stabilized with bug fixes in IPVS version 1.0.12.[12] In December 2004, IPVS version 1.2.1 became the stable release integrated into kernel 2.6.10, introducing enhancements to the Netfilter module for improved packet processing and persistence mechanisms to maintain session affinity across connections.[12][13]
Around the same period, development of KTCPVS (Kernel TCP Virtual Server), an extension for application-level (Layer-7) load balancing within the kernel, had been under way since May 2000; version 0.0.17, released on December 8, 2004, featured stalled-connection collection and tool fixes, but active development ceased after 2004, with the final release (0.0.18) on December 18, 2004.[14][13] Adoption of LVS grew in the mid-2000s among large-scale deployments, including the Wikimedia Foundation, which has utilized it for load balancing incoming requests on commodity servers as part of its core infrastructure.[15]
IPv6 support was added to IPVS in kernel 2.6.28-rc3 on November 2, 2008, extending compatibility to next-generation networks and enabling balanced IPv6 traffic distribution.[12] Around 2012, extensions developed against kernel 2.6.32 added FULLNAT for bidirectional network address translation and SYNPROXY for DDoS protection.[1] As of 2025, IPVS remains an integral part of the Linux kernel through the 6.x series, with ongoing maintenance emphasizing stability rather than major overhauls; minor optimizations in kernels 5.10 and later (released from December 2020 onward) leverage broader multi-core and networking enhancements for improved scalability, though no significant new IPVS-specific features have emerged post-2020.[16] IPVS also serves as an optional backend for in-cluster load balancing in container orchestration systems such as Kubernetes, where that mode became generally available in 2018.
Architecture
Core Components
The Linux Virtual Server (LVS) architecture revolves around a modular structure that enables scalable load balancing within the Linux ecosystem. At its core, the system comprises a director node, real servers, and a virtual server, which collectively provide a transparent clustering mechanism for handling high-volume network traffic.[17] This forms a three-tier architecture, with the third tier being shared storage that ensures consistent data access across the real servers, such as through distributed file systems (e.g., NFS, GFS) or databases for dynamic content.[17]
The director node serves as the front-end load balancer, positioned as the single entry point for client requests in the cluster. It receives incoming connections and distributes them across back-end resources using kernel-level processing to ensure low latency and high throughput, supporting configurations that can manage millions of concurrent connections.[5] The director integrates with Linux kernel modules, such as Netfilter, for efficient packet interception, rewriting, and routing, allowing seamless forwarding without user-space involvement.[12]
Real servers form the back-end cluster of nodes that perform the actual service processing, such as web hosting or database operations. These servers can be geographically dispersed and connected via standard networks like Ethernet, with the director directing traffic to them based on load distribution rules; scalability is achieved by adding nodes, with performance scaling linearly up to hundreds of servers in certain setups.[17][5]
The virtual server acts as a logical entity that presents the entire cluster as a single, unified service accessible via a shared virtual IP (VIP) address. Clients interact with this VIP as if it were a standalone high-performance server, with the underlying complexity of load distribution hidden from view.[17] This abstraction is facilitated by the IP Virtual Server (IPVS) kernel module, which handles the transport-layer balancing.[12]
For enhanced reliability, optional components like Keepalived or UltraMonkey can be integrated to provide high availability and failover capabilities. Keepalived, for instance, implements VRRP for director failover and multi-layer health checks, ensuring seamless transition if a primary director fails, while UltraMonkey supports service monitoring and reconfiguration in LVS clusters.[18][19]
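These roles can be made concrete with a short sketch of the commands a director typically runs; the addresses, interface name, and scheduler below are illustrative assumptions rather than values required by LVS.
# Director: load the IPVS module and claim the virtual IP (VIP) that clients will use
modprobe ip_vs
ip addr add 192.168.1.10/32 dev eth0                      # example VIP on an example interface
# Define the virtual service and register two real servers behind it
ipvsadm -A -t 192.168.1.10:80 -s wlc                      # weighted least-connection scheduling
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -g      # real server 1 (direct routing)
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -g      # real server 2
# Clients see only the VIP; the real servers remain hidden behind it
ipvsadm -L -n                                             # inspect the resulting virtual server table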
IP Virtual Server (IPVS)
The IP Virtual Server (IPVS) serves as the core kernel-based component of Linux Virtual Server (LVS), implementing transport-layer load balancing directly within the Linux kernel to distribute TCP, UDP, and SCTP traffic across multiple backend servers.[12] As a Netfilter module, IPVS operates at Layer 4 of the OSI model, enabling efficient packet forwarding without the overhead of user-space processing.[5] It maintains a virtual server table that maps incoming requests addressed to a virtual IP (VIP) to one of several real server IPs (RIPs), supporting scalable cluster architectures for high-availability services.[12]
IPVS utilizes hash tables for rapid connection lookup and forwarding decisions, achieving constant-time O(1) complexity for packet processing even under high loads.[5] The connection hash table, keyed primarily on client IP address to reduce collisions, tracks active sessions with entries typically sized at 128 bytes, allowing the system to handle up to approximately 2 million concurrent connections on a machine with 256 MB of memory.[5] This design ensures low-latency forwarding while minimizing memory overhead, making IPVS suitable for environments with massive connection volumes.[12]
IPVS supports three primary operation modes to accommodate diverse network topologies: Network Address Translation (NAT), Direct Routing (DR), and IP Tunneling (TUN). In NAT mode, the load balancer rewrites the destination addresses of inbound packets and the source addresses of the returning responses, routing responses back through itself, which suits small clusters but limits scalability to around 20 servers due to translation bottlenecks.[5] DR mode modifies only the MAC address for direct packet delivery to real servers on the same LAN, enabling larger clusters of up to 100 servers without address rewriting overhead, provided the VIP is configured on a non-ARP-responding loopback interface of each real server so that they do not answer ARP requests meant for the director.[5] TUN mode encapsulates packets in IP tunnels for forwarding to remote real servers, supporting geographically distributed setups across up to 100 nodes while preserving high throughput.[5]
Packets destined for the VIP are intercepted at the Netfilter LOCAL_IN hook, after the routing decision has identified the VIP as a local address, where IPVS examines the destination IP and port to determine if they match a virtual service. For new connections, typically initiated by SYN packets, IPVS applies a scheduling algorithm to select a real server and creates a new entry in the hash table, forwarding the packet to the corresponding RIP with mode-specific transformations (e.g., address rewriting in NAT or encapsulation in TUN).[5] Established connections bypass scheduling, with subsequent packets transparently routed to the same real server via the existing hash table entry, ensuring session persistence without additional overhead.[5] This flow supports both TCP state tracking and UDP datagrams, with configurable timeouts (e.g., 300 seconds for UDP) to manage resource cleanup.[12]
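The choice among these modes is expressed per real server through ipvsadm's forwarding flags; the following sketch uses illustrative addresses, shows all three flags on one service purely for comparison, and includes the loopback and ARP adjustments conventionally applied on a DR-mode real server.
# On the director: the forwarding method is set per real server
ipvsadm -A -t 192.168.1.10:80 -s rr
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m   # -m: NAT (masquerading)
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -g   # -g: direct routing (same LAN, default)
ipvsadm -a -t 192.168.1.10:80 -r 10.0.2.50:80 -i       # -i: IP-in-IP tunneling (remote node)
# On a DR-mode real server: accept traffic for the VIP without answering ARP for it
ip addr add 192.168.1.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2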
Load Balancing Methods
Scheduling Algorithms
Linux Virtual Server (LVS) utilizes a suite of scheduling algorithms to efficiently distribute incoming network connections across real servers, enabling scalable load balancing tailored to diverse application needs. These algorithms are implemented within the IP Virtual Server (IPVS) kernel module and can be categorized as static (deterministic) or dynamic. Static algorithms rely on fixed rules independent of current server load, providing predictable distribution, while dynamic algorithms adapt based on real-time metrics like active connections or server weights to achieve better balance.[20] The original LVS implementation introduced four core algorithms, with additional ones developed subsequently to address specific scenarios such as locality-aware caching or minimal queuing.[5]
Round-Robin (RR) is a static scheduling algorithm that sequentially assigns connections to real servers in a cyclic order, assuming all servers have equal processing capacity. It operates at the granularity of individual connections, offering finer control than methods like round-robin DNS. This approach ensures even distribution without requiring server state information, making it suitable for homogeneous clusters.[20][5]
Weighted Round-Robin (WRR) extends RR by incorporating server weights that reflect relative capacities, directing more connections to higher-weighted servers. For instance, servers A, B, and C with weights of 4, 3, and 2 receive connections in the pattern A-A-B-A-B-C-A-B-C over one scheduling period, proportional to their assigned values (the default weight is 1). This static method enhances fairness in heterogeneous environments without dynamic monitoring.[20][5]
Least Connection (LC) is a dynamic algorithm that routes new connections to the real server with the fewest active connections, aiming to equalize load based on current usage. It performs well in scenarios with persistent connections but can be affected by TCP TIME_WAIT states, which delay connection counts for up to two minutes. LC assumes uniform server capacities and adjusts in real time for balanced distribution.[20][5]
Weighted Least Connection (WLC) builds on LC by factoring in server weights to normalize load distribution, sending connections to the server where the ratio of active connections to weight is minimized. This dynamic approach better handles varying server capacities, such as when a more powerful server is assigned a higher weight to process proportionally more traffic. It is widely used for its adaptability in production clusters.[20][5]
Locality-Based Least Connection (LBLC) is a dynamic algorithm designed for locality-sensitive applications, such as web caches, where it directs connections for a given destination IP to the least-connected server within a defined set. If that server becomes overloaded, it shifts to the next least-connected option, promoting content locality while balancing load.[20] The Weighted LBLC variant incorporates server weights into the LBLC process, adjusting the least-connection selection to account for capacity differences and further optimizing for replicated cache setups.[20]
Destination Hashing (DH) employs a static hash table keyed on destination IP addresses to consistently map connections to the same real server, ensuring deterministic routing for session persistence without state tracking. It is particularly useful for applications requiring sticky sessions based on client targets.[20] Source Hashing (SH) similarly uses a static hash on source IP addresses to assign connections predictably to servers, maintaining locality for clients from the same IP and supporting persistence in NAT environments.[20]
Shortest Expected Delay (SED) dynamically selects the server offering the minimal expected response time, calculated by considering current connections and weights to predict delay as (active connections + 1) divided by server weight. This method prioritizes low-latency paths, making it effective for time-sensitive services.[20] Never Queue (NQ) first attempts to route connections to idle servers for zero delay; if none are available, it falls back to SED to minimize queuing. This dynamic algorithm is optimized for scenarios where avoiding wait times is critical, such as real-time applications.[20]
Algorithm selection in LVS is performed using the ipvsadm tool with the -s flag to specify the scheduler, allowing administrators to choose between deterministic and dynamic behaviors based on workload requirements. These schedulers integrate with persistence modes for session maintenance, as detailed in related sections.[20]
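The scheduler choice and per-server weights map directly onto ipvsadm options; a minimal sketch follows, with illustrative addresses and the 4/3/2 weights from the WRR example above.
# Weighted round-robin with capacity-based weights (A=4, B=3, C=2)
ipvsadm -A -t 192.168.1.10:80 -s wrr
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -g -w 4
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -g -w 3
ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.102:80 -g -w 2
# The scheduler of an existing virtual service can be changed in place with -E
ipvsadm -E -t 192.168.1.10:80 -s wlc    # weighted least-connection
ipvsadm -E -t 192.168.1.10:80 -s sed    # shortest expected delay
ipvsadm -E -t 192.168.1.10:80 -s sh     # source hashing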
Persistence and Health Checks
In Linux Virtual Server (LVS), persistence mechanisms ensure that subsequent connections from the same client are routed to the same real server, which is essential for stateful applications such as HTTPS sessions or FTP transfers that require session affinity. This is achieved through a persistent port feature that creates a connection template upon the client's initial access to the virtual service; the template, formatted as <client IP, 0, virtual IP, virtual port, real server IP, real server port>, stores the mapping in a kernel hash table to direct all related traffic consistently.[21]
Persistence can operate on a timeout basis, where templates expire after a configurable duration—defaulting to 300 seconds—or when all associated connections terminate, preventing indefinite binding and allowing load redistribution. For finer control, administrators apply a persistent netmask to the client IP for granularity, such as 255.255.255.0 to group clients behind a proxy as a single entity, ensuring balanced yet sticky routing across subnets. Source IP persistence relies on hashing the client IP to select and fix a real server, while advanced setups use firewall marks (fwmarks) to tag packets and enforce custom persistence rules based on additional criteria like protocol or port.[21][22]
Connection templates in persistent configurations handle non-SYN packets by referencing the initial template, maintaining affinity even for ongoing sessions without requiring full handshakes, which supports protocols like FTP that involve multiple ports. For example, in a VS/NAT setup for FTP, a template might bind a client to a real server across control (port 21) and data (port 20) connections, configurable via tools like ipvsadm with a persistent timeout of 540 seconds.[21]
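These persistence controls correspond to ipvsadm's -p (timeout) and -M (netmask) options and to firewall-mark services; the sketch below uses illustrative addresses, and the NAT-mode FTP portion assumes the ip_vs_ftp helper module is available.
# Persistent HTTPS: a client (or, with the /24 netmask, its whole proxy subnet)
# sticks to one real server for 300 seconds
ipvsadm -A -t 192.168.1.10:443 -s wlc -p 300 -M 255.255.255.0
ipvsadm -a -t 192.168.1.10:443 -r 192.168.1.100:443 -m
ipvsadm -a -t 192.168.1.10:443 -r 192.168.1.101:443 -m
# Firewall-mark service: tag FTP control and data ports with the same mark so that
# both are balanced and kept persistent as a single virtual service
iptables -t mangle -A PREROUTING -d 192.168.1.10 -p tcp -m multiport --dports 20,21 -j MARK --set-mark 1
modprobe ip_vs_ftp                  # FTP helper for NAT-mode data connections
ipvsadm -A -f 1 -s rr -p 540
ipvsadm -a -f 1 -r 192.168.1.100 -m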
Health checks in LVS monitor the availability of real servers to maintain cluster reliability, dynamically removing unhealthy nodes from the load-balancing pool to prevent traffic loss. External daemons such as ldirectord perform periodic probes, typically HTTP requests to a known URL, and, when responses fail or time out, instruct ipvsadm to update the IPVS table by quiescing or removing the affected real servers. Integrated tools like Keepalived extend this with a multi-layer health-checking framework, supporting TCP checks with configurable timeouts (e.g., 3 seconds) to detect failures and adjust server weights or exclude nodes accordingly.[23][19]
For failover, LVS integrates VRRP via Keepalived to enable seamless director transitions without service interruption; if the primary load balancer fails its health checks, the backup assumes the virtual IP through gratuitous ARP announcements, and ongoing connections and persistence templates can be preserved when the IPVS connection synchronization daemon replicates state between directors. This combination of persistence and health checks ensures high availability, with real servers reintegrated automatically upon recovery.[19][24]
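A compact way to combine VRRP failover with these health checks is a Keepalived configuration on each director; the following sketch writes an illustrative keepalived.conf (assumed interface eth0, VIP 192.168.1.10, one real server) and is a minimal example rather than a production template.
# Minimal Keepalived configuration for an LVS director (run as root)
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER                 # use BACKUP with a lower priority on the standby director
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.10
    }
}
virtual_server 192.168.1.10 80 {
    delay_loop 6                 # health-check interval in seconds
    lb_algo wlc
    lb_kind NAT
    persistence_timeout 300
    protocol TCP
    real_server 192.168.1.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3    # remove the server after a failed 3-second TCP probe
        }
    }
}
EOF
systemctl restart keepalived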
Configuration and Management
Tools
The primary tool for managing Linux Virtual Server (LVS) configurations is ipvsadm, a command-line utility that allows administrators to add, edit, delete, and inspect virtual servers, real servers, and services within the IP Virtual Server (IPVS) kernel table.[25] It supports operations such as specifying scheduling algorithms, persistence timeouts, and per-server connection thresholds, and it requires root privileges to execute due to its direct interaction with kernel structures.[25] For example, a command like ipvsadm -A -t <VIP>:<port> -s rr adds a virtual server using round-robin scheduling, while ipvsadm -l lists the current table.
To persist LVS configurations across system reboots, ipvsadm-save and ipvsadm-restore are used; the former dumps the IPVS table to standard output in a portable format, which can be redirected to a file, and the latter reloads it from standard input during boot. These tools ensure that virtual server setups survive kernel restarts without manual reconfiguration, typically integrated into init scripts or service files.[26] For instance, running ipvsadm-save > /etc/ipvsadm.rules followed by ipvsadm-restore < /etc/ipvsadm.rules restores the state.
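A typical administrative session, using the same rules file path as the example above, might look like the following sketch.
# Inspect, save, clear and restore the in-kernel IPVS table (run as root)
ipvsadm -L -n                            # list virtual services and real servers numerically
ipvsadm-save -n > /etc/ipvsadm.rules     # dump the table in a reloadable form
ipvsadm -C                               # clear the table (e.g., before a clean reload)
ipvsadm-restore < /etc/ipvsadm.rules     # reload the saved configuration, e.g., at boot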
Runtime monitoring of LVS is facilitated through kernel-provided interfaces and standard networking utilities. The /proc/net/ip_vs file exposes real-time statistics, including the IPVS version, connection counts, and scheduler details, allowing administrators to verify operational status without additional software.[27] For viewing active connections routed through IPVS, tools like ss or netstat provide socket-level insights, such as displaying TCP/UDP flows to virtual IPs when invoked with options like ss -tn or netstat -tn.[25]
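In practice, the IPVS-specific views are the most direct way to observe the balancer; a short sketch of commonly used read-only commands follows.
# Kernel-exposed IPVS state and counters
cat /proc/net/ip_vs              # IPVS version, configured services, schedulers, weights
cat /proc/net/ip_vs_conn         # per-connection table: client, virtual and real endpoints, state
ipvsadm -L -n --stats            # cumulative connections, packets and bytes per service and server
ipvsadm -L -n --rate             # current connection rate and throughput
ipvsadm -L -n -c                 # active connection entries and their chosen real servers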
Auxiliary software enhances LVS capabilities for high availability and advanced routing. Keepalived implements VRRP for failover between multiple directors and includes built-in health checks for real servers, automatically updating the IPVS table upon failures to maintain cluster uptime.[19] It configures LVS via its keepalived.conf file, supporting modes like NAT or DR while providing robust monitoring frameworks. For advanced packet marking, LVS supports netfilter (NFMark) integration, where firewall rules set skb marks to route traffic to specific virtual services without relying on IP/port tuples, enabling policy-based load balancing in complex environments.[28]
In modern Linux distributions with kernel 5.x and later, LVS management tools like Keepalived are integrated with systemd for streamlined service lifecycle control, allowing commands such as systemctl enable --now keepalived to handle startup, dependencies, and restarts automatically.[29]
Setup Process
Deploying a Linux Virtual Server (LVS) cluster requires a Linux kernel with IP Virtual Server (IPVS) support enabled, typically available in distributions like Red Hat Enterprise Linux or Ubuntu. The IPVS kernel module must be loaded on the director (load balancer) node using the command modprobe ip_vs, which activates the necessary components for load balancing. Network configuration involves assigning a Virtual IP (VIP) to the director for external client access and ensuring Director IP (DIP) and Real Server IPs (RIPs) are properly set up; in NAT mode, real servers must route their responses back through the director, often by setting the director's internal IP as their default gateway. IP forwarding must also be enabled on the director with echo 1 > /proc/sys/net/ipv4/ip_forward to allow packet routing between networks.[30][25][31]
Basic setup begins on the director after installing the ipvsadm utility, which manages the IPVS table. Create a virtual server entry for a TCP service using ipvsadm -A -t <VIP>:<port> -s <scheduler>, where <scheduler> selects an algorithm such as round-robin (rr); for example, to balance HTTP traffic: ipvsadm -A -t 192.168.1.10:80 -s rr. Real servers are then added to this virtual service with ipvsadm -a -t <VIP>:<port> -r <RIP>:<port> -m, where the -m flag selects NAT (masquerading) forwarding for that server; for instance, ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m and ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -m. These commands populate the kernel's forwarding table, directing incoming packets to real servers while rewriting source addresses for return traffic. To persist the configuration across reboots, save the table with ipvsadm-save > /etc/ipvsadm.rules and restore it via a startup script.[25][31]
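Put together, the steps above amount to a short script on the director; the addresses are the same illustrative ones used throughout this section.
#!/bin/sh
# Minimal VS/NAT setup on the director (real servers must use the director as their gateway)
modprobe ip_vs
echo 1 > /proc/sys/net/ipv4/ip_forward        # let the director forward between networks
VIP=192.168.1.10
ipvsadm -C                                    # start from an empty IPVS table
ipvsadm -A -t "$VIP":80 -s rr                 # virtual HTTP service, round-robin scheduling
ipvsadm -a -t "$VIP":80 -r 192.168.1.100:80 -m    # -m: NAT (masquerading), set per real server
ipvsadm -a -t "$VIP":80 -r 192.168.1.101:80 -m
ipvsadm-save -n > /etc/ipvsadm.rules          # persist for restoration at boot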
Persistence, or session stickiness, ensures that subsequent requests from the same client IP are routed to the same real server, useful for stateful applications. Enable it when creating the virtual server by adding the -p <timeout> option, where the timeout is in seconds (default 300): ipvsadm -A -t 192.168.1.10:80 -s rr -p 3600. This creates a template entry keyed on the client IP (with the client port zeroed), the VIP, and the destination port, directing matching traffic to the same real server until the timeout expires. Persistence can be applied selectively or globally but requires careful tuning to avoid overloading individual real servers.[25]
Testing the setup involves verifying the IPVS table and simulating client traffic. List the current virtual services and real servers numerically with ipvsadm -L -n to confirm entries without resolving hostnames; active connections can be viewed with ipvsadm -L -c. Send test requests from a client to the VIP (e.g., using curl http://192.168.1.10), observing distribution across real servers via connection counts in the listing. Monitor system logs (/var/log/messages or dmesg) for errors like packet drops or module loading issues, and use ipvsadm -Z to zero counters before retesting for accurate metrics. Successful setup shows balanced requests without failures.[25]
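A simple way to exercise the setup is to generate requests from a client while watching the counters on the director, as sketched below with the same illustrative VIP.
# On a client: send a burst of requests to the VIP
for i in $(seq 1 20); do curl -s -o /dev/null http://192.168.1.10/; done
# On the director: reset counters, then watch the distribution across real servers
ipvsadm -Z                          # zero packet and byte counters for a clean measurement
watch -n 1 ipvsadm -L -n --stats    # totals should grow on both real servers
ipvsadm -L -n -c                    # list individual connections and their chosen servers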
For scaling, weights can be assigned to real servers to influence load distribution proportionally in weighted schedulers like wrr. Add or edit a real server with the -w <weight> option (0-65535, where 0 quiesces the server): ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m -w 2. Health monitoring integrates connection thresholds using -x <upper> (maximum connections before quiescing) and -y <lower> (minimum before reactivation), e.g., ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m -x 100 -y 50; for advanced checks, external scripts (e.g., via ldirectord) can probe real server responsiveness and update the table dynamically. These features allow dynamic adjustment as the cluster grows.[25]
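The weight and threshold options can also be changed on a live real server with ipvsadm's edit command, for example to drain a node before maintenance; the values below are illustrative.
# Give a larger real server proportionally more traffic under wrr/wlc
ipvsadm -e -t 192.168.1.10:80 -r 192.168.1.100:80 -m -w 4
# Quiesce the same server: weight 0 stops new connections while existing ones finish
ipvsadm -e -t 192.168.1.10:80 -r 192.168.1.100:80 -m -w 0
# Cap another server at 100 concurrent connections, reactivating once it falls below 50
ipvsadm -e -t 192.168.1.10:80 -r 192.168.1.101:80 -m -x 100 -y 50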
Integrations and Use Cases
Integration with Kubernetes
Linux Virtual Server (LVS), through its IP Virtual Server (IPVS) subsystem, integrates with Kubernetes by serving as the backend for kube-proxy in IPVS mode, which handles load balancing for Kubernetes Services by redirecting traffic from cluster IPs to pod endpoints using kernel-level IPVS mechanisms.[32] This mode leverages the Netfilter hooks and IPVS hash tables to distribute traffic efficiently, replacing the default iptables mode for improved performance in containerized environments.[32] To enable IPVS mode, administrators configure kube-proxy with the mode: ipvs setting in its configuration file or via the --proxy-mode=ipvs flag, ensuring the necessary kernel modules (such as ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and ip_vs_mh) are loaded on Linux nodes.[33] Additional IPVS-specific options include selecting a scheduler like round-robin (rr) or weighted round-robin (wrr) via the ipvs.scheduler field, and tuning timeouts for TCP sessions (e.g., tcpTimeout and tcpFinTimeout) to optimize connection handling.[33] This setup requires kernel support for IPVS, typically available in Linux distributions used for Kubernetes nodes, and is usually configured during cluster initialization or via tools such as kubeadm.
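As a sketch, assuming a kubeadm-managed cluster where kube-proxy runs as a DaemonSet configured through a ConfigMap named kube-proxy, enabling IPVS mode might look as follows; the module list and object names are conventional and may differ in other distributions.
# On each Linux node: load the modules kube-proxy's IPVS mode expects
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_mh nf_conntrack; do
    modprobe "$mod"
done
lsmod | grep '^ip_vs'                              # confirm the modules are present
# In the cluster: switch the proxier and restart kube-proxy
kubectl -n kube-system edit configmap kube-proxy   # set mode: "ipvs" (and optionally the ipvs scheduler field)
kubectl -n kube-system rollout restart daemonset kube-proxy
# On a node, Service translation now appears as ordinary IPVS state
ipvsadm -L -n | head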
The primary benefits of IPVS mode manifest in large-scale Kubernetes clusters, where it reduces network latency and CPU overhead compared to iptables, especially when managing over 1,000 Services or endpoints, by utilizing efficient hash-based lookups instead of linear rule chains.[30] For instance, in clusters exceeding 1,000 pods, IPVS mode addresses synchronization delays that can occur with iptables, enabling smoother scaling for high-traffic workloads while supporting advanced scheduling algorithms such as least connections (lc), source hashing (sh), and Maglev hashing (mh) for consistent traffic distribution.[32]
As of November 2025, IPVS mode remains a viable and supported option in current Kubernetes releases, though it has been de-emphasized in documentation in favor of emerging alternatives such as the nftables proxy mode, which became generally available in Kubernetes 1.33 and offers superior packet processing efficiency without the need for a virtual interface like kube-ipvs0.[34] eBPF-based proxies are also gaining traction for performance-critical setups, potentially signaling future deprecation risks for IPVS, but it continues to suit environments prioritizing kernel-native load balancing.[34] Limitations include dependency on Linux kernel IPVS availability and the absence of built-in support for non-TCP/UDP protocols without additional configuration.[32]