
Linux Virtual Server

The Linux Virtual Server (LVS) is a free and open-source load balancing software project that integrates into the Linux kernel to enable the construction of highly scalable and highly available server clusters, allowing multiple real servers to operate as a single server for distributing network traffic such as web, mail, or VoIP services. LVS achieves this through its core component, the IP Virtual Server (IPVS) subsystem, which uses kernel-level packet processing to dispatch incoming requests to backend real servers based on configurable scheduling algorithms, while maintaining transparency to clients. The project supports three primary load balancing methods—VS/NAT (Network Address Translation), VS/TUN (IP tunneling), and VS/DR (Direct Routing)—each optimized for different network topologies and scalability needs, with VS/DR offering the highest performance by avoiding packet rewriting overhead. Initiated in 1998 by Wensong Zhang at the National Laboratory for Parallel & Distributed Processing in Changsha, China, LVS was designed to address the limitations of single-server architectures in handling growing internet workloads, evolving from early prototypes into a mature kernel subsystem adopted by mainline Linux distributions. Key milestones include the addition of IPv6 support in kernel 2.6.28 (2008) and, around 2012, the FULLNAT and SYNPROXY extensions developed against kernel 2.6.32 for bidirectional NAT and DDoS protection. The project is licensed under the GNU General Public License (GPL) and includes user-space tools like ipvsadm for configuring virtual services. LVS clusters emphasize high availability through mechanisms like health checks on real servers and automatic reconfiguration, enabling seamless scaling by adding or removing nodes without service interruption. Notable for its efficiency, LVS can handle over 1 Gbps throughput in tunneling mode and supports hundreds of backend servers, making it a foundational building block for large-scale deployments in production environments, including integrations with tools like Keepalived for VRRP-based failover. Its architecture separates the load balancer (often called LinuxDirector) from the real servers, which can be connected via a high-speed LAN or a geographically dispersed WAN, ensuring reliability and performance in diverse setups.

Introduction

Definition and Purpose

The Linux Virtual Server (LVS) is a free, open-source load balancing software integrated into the Linux kernel, enabling the distribution of IP traffic across multiple real servers via a shared virtual IP address. This architecture allows a cluster of commodity servers to function as a single, unified virtual server, providing transparent scalability without altering client-side configurations. The primary purpose of LVS is to construct highly scalable and highly available server clusters for demanding network services, including web hosting, mail, and VoIP applications. By efficiently routing incoming requests using methods such as Network Address Translation (NAT), IP Tunneling (TUN), and Direct Routing (DR), LVS supports environments capable of handling high volumes of requests, ensuring reliability and performance under heavy loads through the addition of real servers as needed. At its core, LVS employs a director-based load balancing model, where a front-end director processes client connections and forwards them to back-end real servers, all while maintaining session transparency and without requiring modifications to application code. This approach leverages the kernel's networking stack to achieve low-latency, high-throughput distribution, making it suitable for large-scale deployments. LVS has been licensed under the GNU General Public License (GPL) version 2 since its inception as an open-source project initiated by Wensong Zhang in 1998.

Key Features

Linux Virtual Server (LVS) provides robust support for core transport protocols including TCP, UDP, and SCTP, enabling efficient load balancing for a wide range of network services. Additionally, it handles specialized protocols such as FTP through integration with Netfilter hooks in the Linux kernel, allowing seamless management of connection-oriented traffic without disrupting standard operations. A key advantage of LVS is its high scalability, demonstrated by its ability to support millions of concurrent connections on commodity hardware with sufficient memory, making it suitable for large-scale deployments. This scalability stems from its cluster-based architecture, which distributes load across multiple real servers while minimizing bottlenecks at the director node. LVS operates with complete transparency to clients and backend servers, requiring no alterations to requests or server configurations; it employs a virtual IP address (VIP) to present the cluster as a single entity. Packet processing occurs directly at the kernel level via the IP Virtual Server (IPVS) module, ensuring low latency and reduced CPU overhead compared to user-space alternatives. As an open-source solution under the GNU General Public License, LVS facilitates customization and integration with other Linux tools, such as iptables and keepalived, to enhance flexibility in diverse environments.
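As a quick way to confirm that a given kernel actually exposes this protocol support, the following sketch checks the relevant build options and loaded modules; it assumes a distribution that ships its kernel configuration under /boot, which is common but not universal.

    # Verify IPVS support and per-protocol helpers in the running kernel.
    grep -E 'CONFIG_IP_VS(_PROTO_(TCP|UDP|SCTP))?=' /boot/config-$(uname -r)
    # List any IPVS modules already loaded (ip_vs core plus scheduler modules).
    lsmod | grep '^ip_vs'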

History and Development

Origins

The Linux Virtual Server (LVS) project was initiated by Wensong Zhang in May 1998 while he was a researcher at the National Laboratory for Parallel & Distributed Processing in Changsha, Hunan, China. This development occurred amid the rapid expansion of the Internet in the late 1990s, when the demand for scalable and highly available web services outpaced the capabilities of single servers. Zhang's work was motivated by the need to leverage inexpensive commodity hardware in clusters to provide reliable network services, drawing inspiration from ongoing research in parallel and distributed computing systems. The project sought to create an open-source framework for load balancing that could support high-performance virtual servers without relying on proprietary hardware solutions. The initial implementation of LVS involved kernel-level extensions to the TCP/IP stack to enable efficient layer-4 switching, complemented by user-space tools for managing load distribution. The first public release in 1998 introduced prototypes that were quickly adopted in academic and research settings, including early deployments for high-traffic sites such as www.linux.com and sourceforge.net. Subsequent efforts focused on refining the system for broader integration, culminating in its inclusion in the Linux kernel.

Milestones and Releases

A stable version of the IP Virtual Server (IPVS) patch (1.0.8), the core load balancing component of Linux Virtual Server (LVS), was released for the Linux kernel 2.2 series on May 14, 2001, providing support for high-performance transport-layer load balancing without requiring additional user-space modules. This enabled LVS to scale internet services efficiently on commodity hardware, marking a shift from earlier prototypes. The first mainline integration of IPVS into the Linux kernel occurred in version 2.4.28 in November 2004, further stabilized with bug fixes in IPVS version 1.0.12. In December 2004, IPVS version 1.2.1 became the stable release integrated into kernel 2.6.10, introducing enhancements to the Netfilter module for improved packet processing and persistence mechanisms to maintain session affinity across connections. Around the same period, development of KTCPVS (Kernel TCP Virtual Server), an extension for application-level (Layer-7) load balancing within the kernel, began in May 2000, with version 0.0.17 released on December 8, 2004, featuring stalled connection collection and tool fixes; however, active development ceased after 2004, with the final release (0.0.18) on December 18, 2004. Adoption of LVS grew in the mid-2000s among large-scale deployments, including the Wikimedia Foundation, which has utilized it for load balancing incoming requests on commodity servers as part of its core infrastructure. IPv6 support was added to IPVS in kernel 2.6.28-rc3 on November 2, 2008, extending compatibility to next-generation networks and enabling balanced IPv6 traffic distribution. Around 2012, patches developed against kernel 2.6.32 added advanced features to IPVS, including FULLNAT for bidirectional network address translation and SYNPROXY for DDoS protection. As of 2025, IPVS remains an integral part of the Linux kernel through the 6.x series, with ongoing maintenance emphasizing stability rather than major overhauls; minor optimizations in kernels 5.10 and later (released from December 2020 onward) leverage broader multi-core and networking enhancements for improved scalability, though no significant new IPVS-specific features have emerged post-2020. LVS's IPVS continues to underpin modern orchestration, such as Kubernetes' in-cluster load balancing through kube-proxy's IPVS mode, generally available since 2018.

Architecture

Core Components

The Linux Virtual Server (LVS) architecture revolves around a modular structure that enables scalable load balancing within the Linux ecosystem. At its core, the system comprises a director node, real servers, and a virtual server, which collectively provide a transparent clustering framework for handling high-volume traffic. This forms a three-tier architecture, with the third tier being shared storage that ensures consistent content across the real servers, such as through distributed file systems (e.g., NFS, GFS) or database systems for dynamic content. The director node serves as the front-end load balancer, positioned as the single entry point for client requests in the cluster. It receives incoming connections and distributes them across back-end resources using kernel-level processing to ensure low latency and high throughput, supporting configurations that can manage millions of concurrent connections. The director integrates with kernel networking modules, such as Netfilter, for efficient packet interception, rewriting, and routing, allowing seamless forwarding without user-space involvement. Real servers form the back-end tier of nodes that perform the actual service processing, such as web hosting or database operations. These servers can be geographically dispersed and connected via standard networks like Ethernet, with the director steering traffic to them based on load distribution rules; scalability is achieved by adding nodes, and performance scales nearly linearly up to hundreds of servers in certain setups. The virtual server acts as a logical entity that presents the entire cluster as a single, unified service accessible via a shared virtual IP (VIP) address. Clients interact with this VIP as if it were a standalone high-performance server, with the underlying complexity of load distribution hidden from view. This abstraction is facilitated by the IP Virtual Server (IPVS) kernel module, which handles the transport-layer balancing. For enhanced reliability, optional components like Keepalived or UltraMonkey can be integrated to provide failover and health-monitoring capabilities. Keepalived, for instance, implements VRRP for director failover and multi-layer health checks, ensuring seamless transition if a primary director fails, while UltraMonkey supports service monitoring and reconfiguration in LVS clusters.
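A minimal Keepalived configuration sketch can make this arrangement concrete; the VIP 192.168.1.10, interface eth0, and the two real servers 192.168.1.100 and 192.168.1.101 below are illustrative placeholders rather than values prescribed by LVS itself, and real deployments tune the check and VRRP parameters.

    # /etc/keepalived/keepalived.conf (illustrative excerpt)
    vrrp_instance VI_1 {
        state MASTER                # a backup director would use state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.1.10            # the shared VIP held by the active director
        }
    }

    virtual_server 192.168.1.10 80 {
        delay_loop 6                # health-check interval in seconds
        lb_algo wlc                 # IPVS scheduler (weighted least-connection)
        lb_kind DR                  # forwarding method: DR, NAT, or TUN
        protocol TCP

        real_server 192.168.1.100 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3   # remove the server from the IPVS table on failure
            }
        }
        real_server 192.168.1.101 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }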

IP Virtual Server (IPVS)

The IP Virtual Server (IPVS) serves as the core kernel-based component of Linux Virtual Server (LVS), implementing transport-layer load balancing directly within the Linux kernel to distribute TCP, UDP, and SCTP traffic across multiple backend servers. As a Netfilter-based module, IPVS operates at Layer 4 of the OSI model, enabling efficient packet forwarding without the overhead of user-space processing. It maintains a virtual server table that maps incoming requests addressed to a virtual IP (VIP) to one of several real server IPs (RIPs), supporting scalable cluster architectures for high-availability services. IPVS utilizes hash tables for rapid connection lookup and forwarding decisions, achieving constant-time O(1) complexity for packet processing even under high loads. The connection hash table, keyed primarily on the client address and port to reduce collisions, tracks active sessions with entries typically sized at 128 bytes, allowing the system to handle up to approximately 2 million concurrent connections on a director with 256 MB of free memory. This ensures low-latency forwarding while minimizing overhead, making IPVS suitable for environments with massive connection volumes. IPVS supports three primary operation modes to accommodate diverse network topologies: Network Address Translation (NAT), Direct Routing (DR), and IP Tunneling (TUN). In NAT mode, the load balancer rewrites both source and destination addresses of packets, routing responses back through itself, which suits small clusters but limits scalability to around 20 servers due to translation bottlenecks. DR mode modifies only the destination MAC address for direct packet delivery to real servers on the same physical network segment, enabling larger clusters of up to 100 servers without address rewriting overhead, provided the VIP is configured on a non-ARP loopback interface so that real servers do not answer ARP requests for it. TUN mode encapsulates packets in IP tunnels for forwarding to remote real servers, supporting geographically distributed setups across up to 100 nodes while preserving high throughput. Packets destined for the VIP are intercepted early in the network stack: after the Netfilter PRE_ROUTING stage, routing classifies them as locally destined because the VIP resides on the director, and IPVS examines the destination IP and port to determine whether they match a virtual service. For new connections, typically initiated by SYN packets, IPVS applies a scheduling algorithm to select a real server and creates a new entry in the connection table, forwarding the packet to the corresponding RIP with mode-specific transformations (e.g., address rewriting in NAT or encapsulation in TUN). Established connections bypass scheduling, with subsequent packets transparently routed to the same real server via the existing entry, ensuring session persistence without additional overhead. This flow supports both TCP state tracking and UDP datagrams, with configurable timeouts (e.g., 300 seconds for UDP) to manage resource cleanup.
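The following shell sketch illustrates these mechanics under assumed placeholder addresses (192.0.2.10 as the VIP, 10.0.0.x and 198.51.100.20 as real servers): the conn_tab_bits module parameter sizes the connection hash table, and the forwarding method is chosen per real server with ipvsadm's -m, -g, and -i flags.

    # Load IPVS with a larger connection hash table (2^20 buckets).
    modprobe ip_vs conn_tab_bits=20

    # One virtual service; each real server entry picks its own forwarding method.
    ipvsadm -A -t 192.0.2.10:80 -s wlc
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m          # -m = NAT (masquerading)
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -g          # -g = Direct Routing (default)
    ipvsadm -a -t 192.0.2.10:80 -r 198.51.100.20:80 -i      # -i = IP tunneling

    # Inspect the state IPVS keeps in the kernel.
    ipvsadm -L -n -c            # per-connection entries (client, VIP, real server)
    cat /proc/net/ip_vs         # virtual services, schedulers, and real servers
    cat /proc/net/ip_vs_conn    # raw connection table contents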

Load Balancing Methods

Scheduling Algorithms

Linux Virtual Server (LVS) utilizes a suite of scheduling algorithms to efficiently distribute incoming network connections across real servers, enabling scalable load balancing tailored to diverse application needs. These algorithms are implemented within the IP Virtual Server (IPVS) and can be categorized as static (deterministic) or dynamic. Static algorithms rely on fixed rules independent of current load, providing predictable distribution, while dynamic algorithms adapt based on metrics like active connections or server weights to achieve better balance. The original LVS implementation introduced four core algorithms, with additional ones developed subsequently to address specific scenarios such as locality-aware caching or minimal queuing. Round-Robin (RR) is a static algorithm that sequentially assigns connections to real servers in a circular order, assuming all servers have equal processing capacity. It operates at the granularity of individual connections, offering finer control than methods like round-robin DNS. This approach ensures even distribution without requiring server state information, making it suitable for homogeneous clusters. Weighted Round-Robin (WRR) extends RR by incorporating server weights that reflect relative capacities, directing more connections to higher-weighted servers. For instance, servers A, B, and C with weights of 4, 3, and 2 would receive connections in a repeating pattern like A-A-B-A-B-C-A-B-C, proportional to their assigned values (the default weight is 1). This static method enhances fairness in heterogeneous environments without dynamic monitoring. Least Connection (LC) is a dynamic algorithm that routes new connections to the real server with the fewest active connections, aiming to equalize load based on current usage. It performs well in scenarios with persistent connections but can be affected by TIME_WAIT states, which inflate connection counts for up to two minutes. LC assumes uniform server capacities and adjusts in real time for balanced distribution. Weighted Least Connection (WLC) builds on LC by factoring in server weights to normalize load distribution, sending new connections to the server where the ratio of active connections to weight is minimized. This dynamic approach better handles varying capacities, such as when a more powerful server is assigned a higher weight to process proportionally more traffic. It is widely used for its adaptability in production clusters. Locality-Based Least Connection (LBLC) is a dynamic algorithm designed for locality-sensitive applications, such as web proxy caches, where it directs connections for a given destination to the least-connected server within a defined set. If that server becomes overloaded, it shifts to the next least-connected option, promoting content locality while balancing load. The Weighted LBLC variant incorporates server weights into the LBLC process, adjusting the least-connection selection to account for capacity differences and further optimizing for replicated cache setups. Destination Hashing (DH) employs a static hash function keyed on destination IP addresses to consistently map connections to the same real server, ensuring deterministic routing for session consistency without state tracking. It is particularly useful for applications requiring sticky sessions based on client targets. Source Hashing (SH) similarly uses a static hash function on source IP addresses to assign connections predictably to real servers, maintaining locality for clients from the same IP and supporting consistent routing in NAT environments. Shortest Expected Delay (SED) dynamically selects the server offering the minimal expected response time, calculated from current active connections and weights, predicting delay as (active connections + 1) divided by the server weight. This prioritizes low-latency paths, making it effective for time-sensitive services. Never Queue (NQ) first attempts to route to idle servers for zero delay; if none are available, it falls back to SED to minimize queuing. This dynamic algorithm is optimized for scenarios where avoiding wait times is critical, such as real-time applications. Algorithm selection in LVS is performed using the ipvsadm utility with the -s option to specify the scheduler, allowing administrators to choose between static and dynamic behaviors based on workload requirements. These schedulers integrate with persistence modes for session affinity, as detailed in related sections.
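A short ipvsadm sketch of scheduler selection and weighting follows; the addresses and weights are placeholders, and the scheduler names (rr, wrr, lc, wlc, lblc, lblcr, dh, sh, sed, nq) are the standard IPVS identifiers.

    # Create a virtual service with weighted round-robin scheduling.
    ipvsadm -A -t 192.0.2.10:80 -s wrr

    # Assign weights roughly proportional to server capacity.
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -g -w 4
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -g -w 3
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.13:80 -g -w 2

    # Switch the running service to shortest-expected-delay scheduling,
    # where expected delay is (active connections + 1) / weight.
    ipvsadm -E -t 192.0.2.10:80 -s sed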

Persistence and Health Checks

In Linux Virtual Server (LVS), persistence mechanisms ensure that subsequent connections from the same client are routed to the same real server, which is essential for stateful applications such as SSL sessions or FTP transfers that require session affinity. This is achieved through a persistent port feature that creates a connection template upon the client's initial access to the virtual service; the template, formatted as <client IP, 0, virtual IP, virtual port, real server IP, real server port>, stores the mapping in the connection table to direct all related traffic consistently. Persistence can operate on a timeout basis, where templates expire after a configurable duration—defaulting to 300 seconds—or when all associated connections terminate, preventing indefinite binding and allowing load redistribution. For coarser client grouping, administrators apply a persistent netmask to the client IP address, such as 255.255.255.0 to treat clients behind a proxy or NAT gateway as a single entity, ensuring balanced yet sticky distribution across subnets. Source IP persistence relies on hashing the client IP to select and fix a real server, while advanced setups use firewall marks (fwmarks) to tag packets and enforce custom persistence rules based on additional criteria such as port or protocol. Connection lookups in persistent configurations handle non-SYN packets by referencing the initial template, maintaining affinity even for ongoing sessions without requiring full handshakes, which supports protocols like FTP that involve multiple ports. For example, in a virtual server setup for FTP, a template might bind a client to a real server across control (port 21) and data (port 20) connections, configurable via tools like ipvsadm with a persistent timeout of 540 seconds. Health checks in LVS monitor the availability of real servers to maintain cluster reliability, dynamically removing unhealthy nodes from the load-balancing pool to prevent traffic loss. External daemons such as ldirectord perform periodic probes—typically HTTP requests to a known URL—and instruct ipvsadm to update the IPVS table by quiescing or removing failed real servers if responses fail or time out. Integrated tools like Keepalived extend this with a multi-layer health-checking framework, supporting TCP, HTTP, SSL, and custom script checks with configurable timeouts (e.g., 3 seconds) to detect failures and adjust server weights or exclude nodes accordingly. For failover, LVS integrates VRRP via Keepalived to enable seamless transitions without service interruption; if the primary load balancer fails health checks, the backup assumes the virtual IP through gratuitous ARP announcements, preserving ongoing connections and persistence templates. This combination of persistence and health checks ensures high availability, with real servers reintegrated automatically upon recovery.
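A hedged ipvsadm sketch of these persistence controls, using placeholder addresses (192.0.2.10 as the VIP, 10.0.0.x as real servers) and the port-0 wildcard form that the LVS documentation suggests for multi-port protocols such as FTP:

    # Persistent wildcard service (port 0 = all ports), so a client's FTP control
    # and data connections land on the same real server for 540 seconds;
    # -M groups all clients sharing a /24 prefix onto one server.
    ipvsadm -A -t 192.0.2.10:0 -s wlc -p 540 -M 255.255.255.0
    ipvsadm -a -t 192.0.2.10:0 -r 10.0.0.11:0 -g
    ipvsadm -a -t 192.0.2.10:0 -r 10.0.0.12:0 -g

    # Persistence templates appear alongside regular entries in the connection listing.
    ipvsadm -L -n -c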

Configuration and Management

Tools

The primary tool for managing Linux Virtual Server (LVS) configurations is ipvsadm, a command-line utility that allows administrators to add, edit, delete, and inspect virtual servers, real servers, and services within the IP Virtual Server (IPVS) table. It supports operations such as specifying scheduling algorithms, persistence timeouts, and connection thresholds, but requires root privileges to execute due to its direct interaction with kernel structures. For example, commands like ipvsadm -A -t <VIP>:<port> -s rr add a virtual server using round-robin scheduling, while ipvsadm -L lists the current table. To persist LVS configurations across system reboots, ipvsadm-save and ipvsadm-restore are used; the former dumps the IPVS table to standard output in a portable format, which can be redirected to a file, and the latter reloads it from standard input during startup. These tools ensure that virtual server setups survive restarts without manual reconfiguration, typically integrated into init scripts or systemd unit files. For instance, running ipvsadm-save > /etc/ipvsadm.rules followed by ipvsadm-restore < /etc/ipvsadm.rules restores the state. Runtime monitoring of LVS is facilitated through kernel-provided interfaces and standard networking utilities. The /proc/net/ip_vs file exposes real-time statistics, including the IPVS version, connection counts, and scheduler details, allowing administrators to verify operational status without additional software. For viewing active connections routed through IPVS, tools like ss or netstat provide socket-level insights, such as displaying TCP/UDP flows to virtual IPs when invoked with options like ss -tn or netstat -tn. Auxiliary software enhances LVS capabilities for high availability and advanced routing. Keepalived implements VRRP for failover between multiple directors and includes built-in health checks for real servers, automatically updating the IPVS table upon failures to maintain cluster uptime. It configures LVS via its keepalived.conf file, supporting modes like NAT or DR while providing robust monitoring frameworks. For advanced packet marking, LVS supports Netfilter firewall marks (fwmarks), where iptables rules set packet marks to route traffic to specific virtual services without relying on IP/port tuples, enabling policy-based load balancing in complex environments. In modern Linux distributions with kernel 5.x and later, LVS management tools like Keepalived are integrated with systemd for streamlined service lifecycle control, allowing commands such as systemctl enable --now keepalived to handle startup, dependencies, and restarts automatically.
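The sketch below consolidates these tools into one sequence; the VIP 192.0.2.10, the rules file path, and the firewall mark value 21 are illustrative assumptions rather than required conventions.

    # Persist the IPVS table and reload it (e.g., from a boot-time script).
    ipvsadm-save -n > /etc/ipvsadm.rules
    ipvsadm-restore < /etc/ipvsadm.rules

    # Runtime inspection through the kernel interface and socket tools.
    cat /proc/net/ip_vs            # IPVS version, virtual services, schedulers
    ipvsadm -L -n --stats          # per-service packet and byte counters
    ss -tn state established       # established TCP sockets seen on the director

    # Firewall-mark routing: tag HTTP and HTTPS with mark 21, balance as one service.
    iptables -t mangle -A PREROUTING -d 192.0.2.10 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 21
    ipvsadm -A -f 21 -s rr

    # Manage the Keepalived director service under systemd.
    systemctl enable --now keepalived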

Setup Process

Deploying a Linux Virtual Server (LVS) cluster requires a Linux kernel with IP Virtual Server (IPVS) support enabled, typically available in distributions like Red Hat Enterprise Linux or Ubuntu. The IPVS kernel module must be loaded on the director (load balancer) node using the command modprobe ip_vs, which activates the necessary components for load balancing. Network configuration involves assigning a Virtual IP (VIP) to the director for external client access and ensuring the Director IP (DIP) and Real Server IPs (RIPs) are properly set up; in NAT mode, real servers must route their responses back through the director, often by setting the director's internal IP as their default gateway. IP forwarding must also be enabled on the director with echo 1 > /proc/sys/net/ipv4/ip_forward to allow packet routing between networks. Basic setup begins on the director after installing the ipvsadm utility, which manages the IPVS table. Create a virtual server entry for a TCP service using ipvsadm -A -t <VIP>:<port> -s <scheduler>, where <scheduler> selects an algorithm like round-robin (rr). For example, to balance HTTP traffic: ipvsadm -A -t 192.168.1.10:80 -s rr. Add real servers to this virtual service with ipvsadm -a -t <VIP>:<port> -r <RIP>:<port> -m, where -m specifies NAT (masquerading) forwarding; for instance, ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m and ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -m. These commands populate the kernel's forwarding table, directing incoming packets to real servers while rewriting source addresses for return traffic. To persist the configuration across reboots, save the table with ipvsadm-save > /etc/ipvsadm.rules and restore it via a startup script. Persistence, or session stickiness, ensures that subsequent requests from the same client are routed to the same real server, useful for stateful applications. Enable it when creating the virtual server by adding the -p <timeout> option, where timeout is in seconds (default 300): ipvsadm -A -t 192.168.1.10:80 -s rr -p 3600. This creates a template entry based on the client IP, VIP, and destination port, directing matching traffic consistently until the timeout expires. Persistence can be applied selectively or globally but requires careful tuning to avoid overloading individual real servers. Testing the setup involves verifying the IPVS table and simulating client traffic. List the current virtual services and real servers numerically with ipvsadm -L -n to confirm entries without resolving hostnames; active connections can be viewed with ipvsadm -L -c. Send test requests from a client to the VIP (e.g., using curl http://192.168.1.10), observing distribution across real servers via connection counts in the listing. Monitor system logs (/var/log/messages or dmesg) for errors like packet drops or module loading issues, and use ipvsadm -Z to zero counters before retesting for accurate metrics. A successful setup shows balanced requests without failures. For scaling, weights can be assigned to real servers to influence load distribution proportionally in weighted schedulers like wrr. Add or edit a real server with the -w <weight> option (0-65535, where 0 quiesces the server): ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m -w 2.
Health monitoring integrates connection thresholds using -x <upper> (maximum connections before quiescing) and -y <lower> (minimum before reactivation), e.g., ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m -x 100 -y 50; for advanced checks, external scripts (e.g., via ldirectord) can probe real server responsiveness and update the table dynamically. These features allow dynamic adjustment as the cluster grows.
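Pulling the steps above together, a director-side script for the NAT example might look like the following sketch; the addresses, weights, and thresholds simply mirror the illustrative values used in this section.

    #!/bin/sh
    # NAT-mode director setup (sketch; adjust addresses to your topology).
    modprobe ip_vs
    echo 1 > /proc/sys/net/ipv4/ip_forward

    ipvsadm -C                                     # start from an empty IPVS table
    ipvsadm -A -t 192.168.1.10:80 -s wrr -p 3600   # persistent HTTP virtual service
    ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.100:80 -m -w 2 -x 100 -y 50
    ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.101:80 -m -w 1 -x 100 -y 50

    ipvsadm-save -n > /etc/ipvsadm.rules           # persist across reboots
    ipvsadm -L -n                                  # verify the resulting table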

Integrations and Use Cases

Integration with Kubernetes

Linux Virtual Server (LVS), through its IP Virtual Server (IPVS) subsystem, integrates with Kubernetes by serving as the backend for kube-proxy in IPVS mode, which handles load balancing for Services by redirecting traffic from cluster IPs to pod endpoints using kernel-level IPVS mechanisms. This mode leverages Netfilter hooks and IPVS hash tables to efficiently distribute traffic, replacing the default iptables mode for improved performance in containerized environments. To enable IPVS mode, administrators configure kube-proxy with the mode: ipvs setting in its configuration file or via the --proxy-mode=ipvs flag, ensuring the necessary kernel modules—such as ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, and ip_vs_mh—are loaded on nodes. Additional IPVS-specific options include selecting a scheduler like round-robin (rr) or weighted round-robin (wrr) via the ipvs.scheduler field, and tuning timeouts for sessions (e.g., tcpTimeout and tcpFinTimeout) to optimize connection handling. This setup requires kernel support for IPVS, typically available in the Linux distributions used for cluster nodes, and is usually configured during cluster initialization or via tools like kubeadm. The primary benefits of IPVS mode manifest in large-scale clusters, where it reduces network latency and CPU overhead compared to iptables mode, especially when managing over 1,000 Services or endpoints, by utilizing efficient hash-based lookups instead of linear rule chains. For instance, in clusters exceeding 1,000 pods, IPVS mode addresses synchronization delays that can occur with iptables rule updates, enabling smoother scaling for high-traffic workloads while supporting advanced scheduling algorithms such as least connections (lc), source hashing (sh), and Maglev hashing (mh) for consistent traffic distribution. As of November 2025, IPVS mode remains a viable and supported option in recent Kubernetes releases, though it has been de-emphasized in documentation in favor of emerging alternatives like the nftables proxy mode, which became generally available in Kubernetes 1.33 and offers superior packet processing efficiency without the need for a virtual interface like kube-ipvs0. eBPF-based proxies are also gaining traction for performance-critical setups, potentially signaling future deprecation risks for IPVS, but it continues to suit environments prioritizing kernel-native load balancing. Limitations include dependency on Linux kernel IPVS availability and the absence of built-in support for non-TCP/UDP protocols without additional configuration.
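A node-level sketch of enabling this mode follows, under the assumption of a kubeadm-managed cluster where kube-proxy runs as a DaemonSet and reads its configuration from the kube-proxy ConfigMap; the module names are those listed above.

    # Load the kernel modules kube-proxy requires for IPVS mode.
    sudo modprobe ip_vs
    sudo modprobe ip_vs_rr
    sudo modprobe ip_vs_wrr
    sudo modprobe ip_vs_sh
    sudo modprobe ip_vs_mh

    # Set "mode: ipvs" (and optionally an ipvs.scheduler such as wrr) in the
    # kube-proxy configuration, then restart the kube-proxy pods.
    kubectl -n kube-system edit configmap kube-proxy
    kubectl -n kube-system rollout restart daemonset kube-proxy

    # Confirm that Services have been programmed into the IPVS table.
    ipvsadm -L -n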

Real-World Examples

The Wikimedia Foundation has employed Linux Virtual Server (LVS) since the mid-2000s to manage load balancing for Wikipedia and other Wikimedia projects, distributing incoming requests across commodity servers in a high-traffic environment. This setup utilizes Direct Routing (DR) mode, allowing backend servers to respond directly to clients, which enhances efficiency for the foundation's global network handling billions of monthly pageviews. LVS operates as a Layer 4 load balancer, integrating with caching layers such as Varnish to support petabyte-scale data delivery without significant bottlenecks. In e-commerce applications, LVS in Network Address Translation (NAT) mode provides a robust solution for web clusters, as illustrated in deployments balancing traffic across multiple real servers for online merchandise ordering. For instance, a typical configuration with 10 real servers can route HTTP requests to backend application servers, achieving high availability in production clusters by minimizing single points of failure through health checks and failover mechanisms. Voice over IP (VoIP) services leverage LVS in IP Tunneling (TUN) mode to distribute calls across geographically dispersed servers, encapsulating packets to overcome network constraints in wide-area setups. Persistence features ensure session stickiness for ongoing call connections, directing subsequent UDP traffic from the same client IP to the initial real server for the duration of the session, typically configured for 300 seconds or more to maintain call integrity. Alibaba Cloud incorporates LVS within its Server Load Balancer (SLB) for Layer 4 load balancing in cloud infrastructure, supporting high-concurrency scenarios such as traffic peaks during online sales events and real-time services. This implementation, combined with Keepalived for VRRP-based redundancy, enables seamless traffic distribution across global data centers. Production deployments of LVS often address failover challenges using Keepalived to synchronize connection states and virtual IP addresses, ensuring minimal disruption during director failures. In a case study from Picsart's infrastructure, this approach provided stateful failover, with zero connection drops in load-balanced services, demonstrating reliability in multi-node clusters from 2022 onward. Similar metrics in recent enterprise setups highlight LVS's role in maintaining service continuity amid hardware or network disruptions.

Terminology

Key Terms

Linux Virtual Server (LVS) is an open-source project that implements kernel-based clustering for high-performance load balancing across multiple servers, enabling scalable and highly available network services. The director refers to the active load-balancing host, typically an LVS router, that intercepts incoming client requests, distributes them to backend servers based on configured algorithms, and monitors the health of those servers to ensure service continuity. A cluster in LVS consists of a group of real servers that collectively provide the same service, appearing to clients as a single, unified virtual server managed by one or more directors for load distribution and failover. Key acronyms used in LVS include VIP (Virtual IP), the publicly accessible address shared by the cluster to represent the virtual service; RIP (Real server IP), the typically private address assigned to each backend real server; DIP (Director IP), the IP address of the director used for internal communication with real servers; and CIP (Client IP), the source address of the requesting client. A virtual service denotes the logical endpoint defined by a VIP and port combination, to which client requests are directed before being forwarded to appropriate real servers within the cluster. Persistence, also known as session affinity, is a mechanism in LVS that ensures all connections from a specific client (identified by source IP address) are routed to the same real server for a configurable timeout period, supporting stateful applications like secure sessions or shopping carts.

IP Address Roles

In Linux Virtual Server (LVS) configurations, distinct IP addresses play specific roles to enable load balancing and high availability across cluster nodes. The Virtual IP address (VIP) serves as the shared, publicly routable entry point advertised by the director, allowing clients to access the virtual service as if it were a single entity. The Director IP address (DIP) represents the actual IP of the load-balancing director itself, facilitating management tasks, inter-cluster communication, and traffic forwarding decisions within the kernel's IPVS module. Each Real IP address (RIP) is uniquely assigned to an individual real server in the cluster, serving as the target for incoming requests forwarded by the director to distribute workload. The Client IP address (CIP) identifies the originating client, which is crucial for features like persistence hashing to ensure session stickiness and for logging or monitoring purposes. These IP addresses interact through defined flows depending on the forwarding mode, optimizing performance and scalability. In Network Address Translation (NAT) mode, incoming requests from the client (source: CIP, destination: VIP) reach the director, which rewrites the destination to the selected RIP while preserving the CIP as the source; responses from the real server (source: RIP, destination: CIP) are routed back through the director, which rewrites the source address to the VIP before delivery to the client, ensuring symmetric translation. This mode centralizes traffic handling at the director but can introduce bottlenecks for high-throughput scenarios. In Direct Routing (DR) mode, the director forwards the original packet (source: CIP, destination: VIP) to the real server by changing only the destination MAC address to that of the selected real server, without altering IP headers; the real server, configured with the VIP on a loopback interface, responds directly (source: VIP, destination: CIP) to the client, bypassing the director and DIP for return traffic to minimize latency. These flows leverage the VIP for unified client access while isolating the DIP and RIP for internal efficiency, enabling LVS to scale horizontally across multiple real servers.
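The DR-mode flow above depends on each real server holding the VIP locally without advertising it via ARP; a common way to arrange this is sketched below, assuming an illustrative VIP of 192.0.2.10 and using the standard Linux ARP tuning sysctls.

    # On each real server in a DR-mode cluster:
    ip addr add 192.0.2.10/32 dev lo              # keep the VIP locally deliverable on loopback
    sysctl -w net.ipv4.conf.all.arp_ignore=1      # answer ARP only for addresses on the queried interface
    sysctl -w net.ipv4.conf.all.arp_announce=2    # use the interface's own address as the ARP source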
