FRRouting
FRRouting (FRR) is a free and open-source software suite that implements and manages various IPv4 and IPv6 routing protocols, designed to integrate with native IP networking stacks on Linux and Unix-like platforms.[1] It serves diverse applications, from connecting hosts, virtual machines, and containers to enabling LAN switching and Internet peering, while supporting nearly all Linux and BSD distributions across modern CPU architectures.[2] Developed as a high-performance solution, FRR can handle full Internet routing tables and is suitable for use in session border controllers (SBCs), commercial routers, and other networking environments.[3]
FRR originated in 2017 as a fork of the Quagga routing protocol suite, initiated by experienced Quagga developers to build upon and improve its foundational architecture and produce a more robust and maintainable routing stack.[1] FRR is distributed under the GNU General Public License version 2 (GPLv2) or later, and its development process draws inspiration from the Linux kernel model, emphasizing community contributions for features, bug fixes, and documentation.[3] Since its inception, FRR has evolved through collaborative efforts from organizations such as 6WIND, NVIDIA, and VMware, fostering a vibrant open-source ecosystem.[1]
At its core, FRR provides comprehensive support for standard routing protocols, including BGP (Border Gateway Protocol), OSPF (Open Shortest Path First) for both IPv4 (OSPFv2) and IPv6 (OSPFv3), RIP (Routing Information Protocol) including RIPv1, RIPv2, and RIPng, IS-IS (Intermediate System to Intermediate System), PIM (Protocol Independent Multicast) with Sparse Mode (SM) and Multicast Source Discovery Protocol (MSDP), LDP (Label Distribution Protocol), BFD (Bidirectional Forwarding Detection), Babel, PBR (Policy-Based Routing), OpenFabric, and VRRP (Virtual Router Redundancy Protocol).[2] It also offers implementations for EIGRP (Enhanced Interior Gateway Routing Protocol) and NHRP (Next Hop Resolution Protocol), with platform-specific feature availability detailed in its official matrix.[4] These protocols enable FRR to manage complex routing topologies efficiently, supporting both unicast and multicast traffic in production networks.[1]
Widely adopted in real-world deployments, FRR powers routing infrastructure for Internet service providers (ISPs), software-as-a-service (SaaS) providers, web-scale businesses, hyperscale services, and Fortune 500 private clouds, as well as universities, research labs, and government entities.[3] Its modular design allows for flexible integration into various operating environments, making it a preferred choice for open networking initiatives and contributing to its status as a modern successor to legacy routing daemons.[1]
Introduction
Overview
FRRouting (FRR) is a free and open-source IP routing protocol suite for Linux and Unix platforms that implements dynamic routing protocols to exchange routing information with other routers, make policy decisions, and install routes into the operating system kernel for packet forwarding.[5][2] It functions as a general-purpose routing stack suitable for connecting hosts, virtual machines, and containers, and is deployed in diverse settings such as home networks, data centers, and Internet exchange points to manage network traffic and connectivity.[1][5] FRR supports static and dynamic routing alongside address management and router advertisements, delivering resiliency through its modular daemon-based architecture that allows independent operation of components.[5]
The project originated as a fork of Quagga and saw its initial release on March 4, 2017; the latest stable version as of November 2025 is 10.5.0, released November 9, 2025.[6][7]
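As a minimal sketch of this workflow, assuming a typical Linux packaging with the /etc/frr/daemons and /etc/frr/frr.conf files, the fragment below enables two protocol daemons and defines a small BGP session and OSPF area; the AS numbers, neighbor address, and prefix are placeholders rather than values from any documented deployment.
  # /etc/frr/daemons -- select which protocol daemons the init scripts start
  bgpd=yes
  ospfd=yes
  ! /etc/frr/frr.conf -- minimal illustrative configuration (placeholder values)
  router bgp 65001
   neighbor 192.0.2.1 remote-as 65002
  !
  router ospf
   network 10.0.0.0/24 area 0
Once the daemons are running, routes learned by bgpd and ospfd are handed to zebra, which selects the best paths and installs them into the kernel for forwarding.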
Platforms and Licensing
FRRouting (FRR) primarily supports Unix-like operating systems, with full compatibility on GNU/Linux distributions and BSD variants including FreeBSD, NetBSD, and OpenBSD. These platforms enable seamless integration with standard networking stacks, such as Linux's Netlink interface for kernel routing updates. While official support is limited to these environments, partial functionality can be achieved on macOS with effort.[5] There is no official support for Microsoft Windows, as FRR is designed for Unix and Linux ecosystems.[1]
In terms of hardware and resource requirements, FRR is lightweight and can operate on low-resource systems such as single-board computers like the Raspberry Pi for basic routing tasks. For production deployments handling larger routing tables or high peer counts, such as full Internet BGP feeds, more robust configurations are recommended, including at least 4 GB of RAM and a quad-core CPU to ensure performance and stability.[5] The software scales effectively to high-performance hardware, supporting deployments from low-cost single-board computers to enterprise-grade routers without architectural limitations.[5]
FRR is distributed under the GNU General Public License version 2 (GPLv2) or later, which permits free use, modification, and redistribution while requiring that any derivative works also be licensed under GPLv2 or later or compatible terms.[8] This open-source licensing model facilitates community contributions and integration into commercial products, provided that source code for modifications is made available if distributed.[8] Users must comply with the GPLv2-or-later conditions, which preclude proprietary extensions that could restrict broader adoption.
The design of FRR emphasizes portability, with platform-dependent code primarily confined to kernel interface abstractions in the zebra daemon, making it straightforward to adapt to new Unix-like systems. This modular approach minimizes recompilation needs and supports cross-platform development, though full feature parity depends on underlying OS capabilities.
Architecture
Core Components
FRRouting employs a modular architecture composed of multiple daemons operating as independent processes, which enhances fault isolation by preventing a failure in one component from affecting others and improves scalability through efficient resource allocation across diverse network environments.[9] This design allows protocol-specific daemons, such as those for BGP or OSPF, to communicate with a central manager while maintaining separation for robustness.[5]
At the core of this architecture is the Zebra daemon, which serves as the central routing manager responsible for interfacing with the operating system kernel, maintaining the Routing Information Base (RIB), computing the Forwarding Information Base (FIB), and facilitating route redistribution between protocols.[9] Zebra receives route updates from various protocol daemons, applies selection policies to determine optimal paths, and pushes these to the kernel for actual packet forwarding.[10] It also handles interface management and event notifications, ensuring synchronized routing state across the system.[9]
The RIB in Zebra acts as a comprehensive repository that aggregates and stores all routing information received from connected protocols and static configurations, enabling Zebra to perform route selection and conflict resolution.[9] In contrast, the FIB represents the optimized subset of routes derived from the RIB, which Zebra installs into the kernel's forwarding table to guide hardware-level packet decisions.[10] FRRouting supports Equal-Cost Multi-Path (ECMP) routing in the FIB, accommodating up to 64 paths per route by default, with configurable limits to balance load across multiple next hops.[9]
Kernel integration is achieved primarily through APIs such as Netlink on Linux, where Zebra installs and withdraws routes, manages interface states, and responds to events like link status changes or address updates.[10] This mechanism ensures real-time synchronization between FRRouting's internal structures and the kernel's forwarding plane, supporting features like Virtual Routing and Forwarding (VRF) for multi-tenancy.[9]
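The RIB-to-FIB flow can be seen with a simple static route; in the sketch below the prefix and next hop are placeholders, and the exact output format varies by FRR version.
  # Add a static route through vtysh (placeholder prefix and next hop)
  vtysh -c 'configure terminal' -c 'ip route 192.0.2.0/24 10.0.0.2'
  # Zebra's RIB view: '>' marks the selected route, '*' marks a FIB-installed next hop
  vtysh -c 'show ip route 192.0.2.0/24'
  # Confirm that zebra pushed the route into the Linux kernel FIB via Netlink
  ip route show 192.0.2.0/24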
Integration and Management
FRRouting provides a unified management interface through vtysh, an integrated shell that offers CLI access to all daemons in a single session, enabling users to configure and monitor routing protocols seamlessly without switching between individual daemon interfaces.[11] Enabled by default during compilation, vtysh connects to each daemon via Unix domain sockets in /var/run/frr, allowing commands to be executed across the suite as if interacting with a monolithic router.[11] For unified setup, configurations are stored in a single integrated file, /etc/frr/frr.conf, which aggregates settings from all daemons and is applied via vtysh -b or the write integrated command after daemon startup to ensure consistency.[11] This approach simplifies administration by centralizing changes, with watchfrr handling file permissions and ownership to prevent conflicts.[11]
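A short sketch of this integrated-configuration workflow, assuming the usual /etc/frr file locations, is shown below.
  ! /etc/frr/vtysh.conf -- have vtysh read and write the single integrated /etc/frr/frr.conf
  service integrated-vtysh-config
  # Apply the integrated file to all running daemons, then save changes back to it
  vtysh -b
  vtysh -c 'write'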
External integrations in FRRouting facilitate interaction with hardware and programmatic systems, primarily through the Forwarding Plane Manager (FPM), which enables Zebra to push routing information to external forwarding planes such as hardware ASICs or DPDK-based dataplanes.[12] FPM operates as a Zebra module, supporting Netlink or Protobuf encodings over TCP (default port 2620), where route updates are framed with headers for reliable transmission and status feedback, including offload success or failure from the dataplane.[12] Additionally, the northbound API leverages YANG models to provide a model-driven interface for programmatic configuration, supporting multiple protocols like NETCONF and RESTCONF through libyang, with callbacks ensuring API-agnostic daemon code and atomic transactions for reliable updates.[13] This architecture allows external orchestrators to manage FRRouting via standard YANG schemas, mirroring CLI structures for compatibility while enabling features like rollback logs.[13]
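As an illustration, the legacy fpm module can be loaded when zebra starts, after which zebra attempts to stream RIB updates (Netlink encoding by default) to a collector on TCP port 2620; the daemons-file line below is a sketch, and the -A and -s values shown are common distribution defaults rather than requirements.
  # /etc/frr/daemons -- load the FPM module in zebra's startup options
  zebra_options="  -A 127.0.0.1 -s 90000000 -M fpm"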
Monitoring and debugging capabilities in FRRouting include SNMP support compliant with key RFCs, where daemons act as AgentX subagents (per RFC 2741) to an external SNMP agent like net-snmp, exposing MIBs for protocols such as BGP (RFC 4273) and OSPF without hosting the agent itself.[14] Configuration requires compile-time enabling and runtime agentx commands, allowing traps for events like peer state changes or VRF interface updates (RFC 4382).[14] For custom automation, Lua 5.3 scripting extends functionality via hooks like on_rib_process_dplane_results, loaded dynamically from /etc/frr/scripts/ without restarts, enabling tasks such as logging route changes or decision logic.[15] Logging and debugging are managed through commands like log file FILENAME [LEVEL] for file-based output with levels from emergencies to debugging, log syslog [LEVEL] for system log integration, and debug route-map for protocol-specific traces, with show logging providing configuration overviews and filters like log filter-text WORD refining outputs.[16]
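A hedged frr.conf fragment combining the SNMP and logging commands above might look like the following; the log-file path and severity levels are illustrative.
  ! Enable the AgentX subagent (requires an SNMP-enabled build and a master
  ! agent such as snmpd running with AgentX support)
  agentx
  !
  ! File and syslog logging at illustrative severity levels
  log file /var/log/frr/frr.log debugging
  log syslog informational
  !
  ! Protocol-specific debugging, for example route-map processing
  debug route-map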
Resiliency features in FRRouting enhance operational continuity, with graceful restart capabilities in Zebra allowing route preservation during daemon restarts by reading kernel routes on startup and optionally delaying sweeps (via -K TIME), minimizing disruptions as peers maintain forwarding state.[9] VRF support leverages Linux network namespaces, mapping them to VRF contexts (enabled via -w or legacy -n), isolating routing tables for multi-tenant environments and enabling fault isolation without global impacts.[9] Dataplane programming options, integrated via Zebra's framework, support diverse backends like DPDK (with -M dplane_dpdk) for direct FIB updates using rte_flow APIs or PBR nexthop resolution, ensuring resilient route installation across kernel, hardware, or user-space planes.[9]
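The startup flags mentioned above are normally chosen independently per deployment; purely as a sketch, a daemons-file line combining them could look like the following, where the 300-second timer is a placeholder and flag availability varies by FRR version.
  # /etc/frr/daemons -- illustrative zebra options:
  #  -K 300          keep kernel routes for 300 seconds across a zebra restart
  #  -w              map Linux network namespaces to VRF contexts
  #  -M dplane_dpdk  load the DPDK dataplane plugin instead of the default kernel path
  zebra_options="  -A 127.0.0.1 -K 300 -w -M dplane_dpdk"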