
Fluentd

Fluentd is an open-source data collector designed as a unified logging layer that decouples data sources from backend systems, enabling the collection, processing, and routing of log data in JSON format across diverse environments. Originally conceived in 2011 by Sadayuki "Sada" Furuhashi, a co-founder of Treasure Data, Inc., Fluentd was open-sourced in October of that year to address challenges in log aggregation for distributed systems, such as inconsistent formats and high resource demands. The project quickly gained traction for its pluggable architecture, which supports over 500 community-contributed plugins for inputs from various sources (e.g., files, HTTP, and system metrics) and outputs to destinations such as databases, search engines, and cloud platforms. Written in C for performance-critical components and Ruby for extensibility, Fluentd emphasizes reliability through built-in buffering mechanisms (in-memory or on-disk) and failover handling, while maintaining low resource usage: typically 30-40 MB of memory and up to 13,000 events per second per core. Hosted under the Cloud Native Computing Foundation (CNCF) since November 2016, Fluentd achieved graduated status in April 2019, reflecting its maturity and widespread adoption in cloud-native ecosystems. Licensed under Apache 2.0, it is used by over 5,000 data-driven companies, with some deployments collecting logs from more than 50,000 servers, making it a cornerstone for observability in microservices, containers, and data pipelines. Its design promotes structured data handling without extensive parsing, facilitating seamless integration and analysis in tools such as Elasticsearch, MongoDB, and Hadoop.

Introduction

Overview

Fluentd is an open-source data collector designed for building a unified logging layer that decouples heterogeneous data sources from backend processing and storage systems. This approach enables seamless aggregation of logs, metrics, and traces from diverse origins, such as applications, servers, and services, into a centralized stream for analysis. By standardizing data ingestion and forwarding, Fluentd simplifies log management in distributed environments without requiring custom integrations for each data pipeline. The primary purpose of Fluentd is to collect, process, and route log data from multiple sources to various storage or analysis backends, ensuring efficient data flow in complex infrastructures. It emphasizes reliability through built-in buffering mechanisms that handle network failures or overloads, a lightweight implementation that minimizes resource overhead, and ease of use via simple configuration files that require minimal setup for deployment. These attributes make it particularly suitable for high-volume logging in production systems. As a Cloud Native Computing Foundation (CNCF) graduated project since April 11, 2019, Fluentd benefits from robust community governance and has been adopted by over 5,000 companies worldwide. It supports more than 500 plugins for extensibility, allowing customization for specific input sources and output destinations. The basic workflow involves ingesting logs through input plugins, applying filters for parsing and enrichment, buffering data for reliable delivery, and routing it to output destinations such as databases or search engines.
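The ingest-filter-buffer-route workflow can be sketched in a minimal configuration. The plugin names (`http`, `grep`, `stdout`) are real Fluentd built-ins; the port, tag pattern, and `level` field are illustrative choices, not prescribed values:

```
# Ingest: accept events over HTTP, e.g. POST /app.access with a JSON body
<source>
  @type http
  port 9880
</source>

# Filter: keep only events whose (illustrative) "level" field says error
<filter app.**>
  @type grep
  <regexp>
    key level
    pattern /error/
  </regexp>
</filter>

# Route: print matching events to stdout; swap for a real backend in production
<match app.**>
  @type stdout
</match>
```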

History

Fluentd was conceived in 2011 by Sadayuki Furuhashi, a co-founder of Treasure Data, Inc., as an internal tool to unify log aggregation across diverse data sources within the company's cloud-based analytics platform. The initiative addressed the challenges of managing fragmented logging in distributed systems, drawing on Furuhashi's experience in data processing. The project was initially developed at Treasure Data's Mountain View headquarters, reflecting the company's focus on scalable data handling for enterprise applications. The source code was released as open source in October 2011, implemented primarily in Ruby to facilitate rapid development and extensibility. This early release quickly attracted developer interest, establishing Fluentd as a foundational tool for unified logging. In 2013, Treasure Data secured $5 million in Series A funding led by Sierra Ventures, which bolstered the company's resources to expand development and community support for Fluentd. On November 8, 2016, the Cloud Native Computing Foundation (CNCF) accepted Fluentd as an incubating project, aligning it with the growing ecosystem of cloud-native technologies. This milestone enhanced its visibility and governance under a neutral foundation. Fluentd advanced to graduated maturity on April 11, 2019, becoming the sixth CNCF project to achieve this status after Kubernetes, Prometheus, Envoy, CoreDNS, and containerd. Post-graduation, development emphasized cloud-native adaptations, including native integrations with Kubernetes for containerized logging workflows. In August 2025, the Fluent Package v6 series was released, introducing two channels: a long-term support (LTS) variant for stability and a normal release channel with planned semi-annual updates to incorporate new features and Fluentd core upgrades. Community engagement has since intensified, with significant contributions from major technology firms driving enhancements in scalability and interoperability.

Technical Foundation

Architecture

Fluentd employs a modular, pluggable architecture that enables extension through an ecosystem of over 500 community-contributed plugins, covering inputs for data ingestion, filters for processing, and outputs for routing to destinations. This design decouples data collection from consumption, allowing seamless integration with diverse sources and sinks while maintaining a unified logging layer based on JSON-formatted events. At its core, Fluentd operates via a streamlined event flow: events are ingested from sources using input plugins, parsed into structured records, optionally filtered or modified, buffered asynchronously for reliability, formatted as needed, and routed to outputs based on matching rules. This pipeline processes timestamped log records in a sequential yet configurable manner, supporting complex routing through labels and match directives to handle multifaceted needs. Each event is structured as a triplet comprising a tag, a time, and a record: the tag, a dot-delimited identifier such as "app.access", denotes the event's origin and drives routing decisions by matching against filter and output configurations; the time provides nanosecond-precision timing; and the record holds the payload as a flexible, JSON-like map of key-value pairs. This tagging system enables efficient, tag-based dispatching without requiring rigid schemas, promoting adaptability in dynamic environments. Reliability is bolstered by asynchronous buffering in output plugins, which queues events to mitigate failures such as network issues or downstream unavailability; buffer types are configurable, including memory-based buffers for low-latency operation and file-based buffers for durable persistence on disk, enabling retry mechanisms and preventing data loss during transient disruptions.
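The tag/time/record triplet and tag-based dispatch can be illustrated with a small Ruby sketch. This is plain Ruby, not Fluentd's internal classes, and the pattern logic is a simplified stand-in for Fluentd's match rules ("*" matches one tag segment, "**" matches any number):

```ruby
# An event is conceptually a (tag, time, record) triplet.
Event = Struct.new(:tag, :time, :record)

event = Event.new(
  "app.access",                                        # dot-delimited origin tag
  Time.now,                                            # event timestamp
  { "method" => "GET", "path" => "/", "code" => 200 }  # JSON-like record payload
)

# Simplified stand-in for Fluentd's match patterns:
# "*" matches exactly one tag segment, "**" matches any number of segments.
def tag_match?(pattern, tag)
  body = Regexp.escape(pattern)
           .gsub('\*\*', '.*')     # ** -> any segments
           .gsub('\*', '[^.]+')    # *  -> one dot-free segment
  /\A#{body}\z/.match?(tag)
end

puts tag_match?("app.*", event.tag)   # true: one extra segment
puts tag_match?("app.**", event.tag)  # true: any depth
puts tag_match?("db.*", event.tag)    # false: different origin
```

A `<match app.**>` directive in a configuration file applies exactly this kind of test to every event's tag before routing it to an output.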
Scalability is achieved through horizontal scaling via multi-process workers: the system spawns multiple independent processes, each handling a subset of plugins, to leverage multi-core CPUs and distribute load, supporting high-throughput scenarios such as containerized deployments in Kubernetes at rates of up to thousands of events per second per core. Conceptually, the architecture can be visualized as a flow from input sources (e.g., logs, metrics) to output sinks (e.g., databases, search engines), with buffering and routing providing a decoupling layer that isolates components and facilitates fault-tolerant, scalable aggregation.

Core Components

Fluentd's core components form a modular pipeline that processes events from input to output, ensuring reliable delivery and routing. These components include input plugins for gathering logs, parsers for structuring data, filters for modification, buffers for queuing, outputs for delivery, formatters for presentation, and a configuration system to define the flow. The pipeline operates on events consisting of a tag, a time, and a record, allowing flexible handling across diverse sources and destinations. Input plugins serve as the entry points, pulling or receiving data from various sources to initiate the pipeline. They generate structured events by capturing raw logs through mechanisms such as file tailing with the in_tail plugin, which monitors log files for new entries; listening on network ports via in_tcp and in_udp for standard or custom protocols; or handling HTTP requests with in_http. For instance, in_tail reads appended lines from files such as application logs, assigning tags based on file paths to route events appropriately. These plugins ensure comprehensive coverage of sources without initially altering the incoming data. Parsers convert unstructured or semi-structured raw input into discrete, structured events, enabling downstream processing. Integrated within input or filter plugins via the <parse> directive, they support formats like JSON for direct key-value extraction or regex-based patterns to dissect log lines into fields such as time, host, and message. For example, the built-in nginx parser handles access logs by matching predefined patterns to populate event records, while the grok parser uses regular expressions for custom Apache-style logs. This step is crucial for transforming heterogeneous log formats into a uniform hash representation. Filters process events after parsing, modifying, enriching, or discarding them to refine the stream. Applied via <filter> directives that match event tags, they perform operations like adding metadata (e.g., via record_transformer), dropping invalid entries, or reformatting fields.
The grep filter, for instance, excludes events matching patterns such as user logouts in access logs, using inclusion or exclusion rules on specific keys. Multiple filters can chain together for complex transformations, ensuring only relevant, augmented events proceed. Buffers manage event queuing between filters and outputs, providing reliability against failures or high loads by temporarily storing data. They organize events into chunks and support types like the memory-based buf_memory for speed in low-risk scenarios or the file-based buf_file for persistence across restarts, as used in outputs like out_s3. Retry logic employs exponential backoff, starting at 1 second and doubling up to a configurable maximum such as 72 hours, with options for indefinite retries or evacuation to backup directories on unrecoverable errors. This decouples ingestion from delivery, preventing data loss during transient issues. Output plugins route filtered and buffered events to final destinations, completing the pipeline by forwarding data to storage or analysis systems. Defined in <match> directives that pattern-match tags, they include out_forward for relaying to other Fluentd instances, out_elasticsearch for indexing in search engines, and out_s3 for archival in Amazon S3. For example, out_stdout simply prints events to the console for debugging, while others handle batching and compression for efficiency. Outputs integrate buffers to manage delivery semantics reliably. Formatters customize the structure of events before output, ensuring compatibility with destination formats. Specified in <format> sections within outputs, they transform records into serialized representations such as JSON for human-readable logs or the binary MessagePack format (Fluentd's internal default) for compact, efficient transmission. The json formatter, for instance, outputs each record as a single line excluding tags unless injected, while the default stdout format prints time, tag, and record for readability. This allows tailored presentation without altering core event data.
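The retry schedule described above (waits start at 1 second and double per attempt, capped by a configured maximum) can be sketched in a few lines of Ruby. This is a simplified model: Fluentd's real retry logic also applies randomization and an overall retry timeout.

```ruby
# Exponential-backoff sketch: wait = base * 2^retry_count, capped at max_interval.
# Defaults here mirror the behavior described above (1 s start, 72 h cap).
def retry_wait(retry_count, base: 1, max_interval: 72 * 3600)
  [base * (2**retry_count), max_interval].min
end

(0..5).each { |n| puts "retry #{n}: wait #{retry_wait(n)}s" }
# Waits double each attempt (1, 2, 4, 8, 16, 32, ...) up to the 72-hour cap.
```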
Configuration basics tie these components together using files with a directive-based syntax, parsed at startup to define the pipeline. Directives like <source> for inputs, <filter> for processing, and <match> for outputs use tag patterns (e.g., the wildcard ** for all events) to route flows declaratively. An example configuration might specify <source> @type tail <parse> @type json </parse> </source> followed by <filter> @type record_transformer </filter> and <match> @type forward </match>, enabling straightforward setup of multi-stage pipelines. Fluentd reloads configurations dynamically without downtime via signals.
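Written out as a complete file, a tail-parse-enrich-forward pipeline of the kind described above might look as follows. The file paths, tag, and destination address are placeholders, and the `hostname` field is an illustrative enrichment:

```
<source>
  @type tail
  path /var/log/app.log              # placeholder log path
  pos_file /var/log/fluentd/app.pos  # placeholder position file
  tag app.log
  <parse>
    @type json
  </parse>
</source>

<filter app.**>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

<match app.**>
  @type forward
  <server>
    host 192.0.2.10                  # placeholder aggregator address
    port 24224
  </server>
</match>
```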

Extensibility and Features

Plugin System

Fluentd's extensibility relies on a modular plugin system that allows users to customize ingestion, processing, and forwarding pipelines. Plugins integrate seamlessly into the core event routing mechanism, enabling the collection, transformation, and delivery of log data from diverse sources to various destinations. The system supports over 500 community-contributed plugins, which are distributed via RubyGems and enhance Fluentd's capabilities without altering its lightweight core. The plugin system categorizes extensions into specific types to handle different stages of the data pipeline:
  • Input plugins (prefixed with in_): Responsible for data ingestion, pulling events from external sources such as files, network sockets, or APIs.
  • Output plugins (prefixed with out_): Handle data export, routing events to destinations like databases, cloud storage, or message queues.
  • Filter plugins (prefixed with filter_): Inspect and modify event streams, such as enriching records or dropping irrelevant events based on criteria.
  • Parser plugins (prefixed with parser_): Structure raw input into Fluentd's JSON-based event format for easier handling.
  • Formatter plugins (prefixed with formatter_): Shape output data into formats suitable for specific destinations, such as JSON or MessagePack.
  • Buffer plugins (prefixed with buf_): Manage queuing and retry logic to ensure reliable data transmission under varying loads.
  • Storage plugins (prefixed with storage_): Provide persistence options for buffering data on disk or in memory.
Plugins are developed in Ruby and must implement Fluentd's defined interfaces to ensure compatibility with the event router. Developers name plugin files as <type>_<name>.rb (e.g., in_tail.rb) and can generate skeletons using the fluent-plugin-generate tool for rapid prototyping. Plugins are packaged as Ruby gems for easy distribution and integration. Installation occurs via the fluent-gem install <plugin-gem-name> command, such as fluent-gem install fluent-plugin-elasticsearch, which handles dependencies automatically. For the td-agent distribution, use td-agent-gem instead. Plugins are then referenced in configuration files using the @type directive, for example, <source> @type tail </source> to activate an input plugin. Version management is supported through Gemfile specifications or explicit version flags to maintain compatibility across Fluentd releases. Key examples include the in_tail input plugin, which monitors and tails log files similar to the Unix tail command, capturing new entries in real time; the out_elasticsearch output plugin, which indexes events into Elasticsearch for search and analytics; and the filter_grep filter plugin, which selects or excludes events matching patterns in specified fields. Maintenance of the plugin ecosystem is community-driven through GitHub repositories, including the fluent-plugins-nursery organization, which coordinates updates, bug fixes, and compatibility testing across Fluentd versions such as v1 and v0.12. Developers contribute via pull requests, ensuring plugins remain aligned with evolving core APIs and security standards.
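The filter-plugin contract can be sketched in plain Ruby. A real plugin lives in a file like filter_add_hostname.rb, subclasses Fluent::Plugin::Filter from the fluentd gem, and registers itself with Fluent::Plugin.register_filter; the standalone class below (name, field, and value all illustrative) only mirrors the filter(tag, time, record) method that Fluentd invokes per event:

```ruby
# Standalone sketch of the filter-plugin contract; NOT a loadable Fluentd
# plugin (no fluentd gem dependency, no registration).
class AddHostnameFilter
  # Fluentd calls filter(tag, time, record) once per event: returning the
  # (possibly modified) record keeps the event, returning nil drops it.
  def filter(tag, time, record)
    record.merge("hostname" => "web-01")  # "web-01" is an illustrative value
  end
end

f = AddHostnameFilter.new
out = f.filter("app.access", Time.now.to_i, { "msg" => "hello" })
p out  # original record plus the added "hostname" key
```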

Key Features

Fluentd provides a unified logging layer that standardizes log formats across diverse and heterogeneous data sources by structuring events in JSON format, enabling seamless collection, filtering, and routing without custom parsing for each source. This approach decouples inputs from outputs, allowing logs from applications, servers, and cloud services to be normalized for consistent processing and analysis. The tool emphasizes high reliability through automatic retries with exponential backoff (default intervals starting at 1 second and doubling up to a configurable maximum), dead-letter-style buffer evacuation for failed chunks since version 1.19.0, and staged buffer handling that manages chunks to prevent data loss during high loads or failures. These mechanisms ensure robust delivery and durability, with buffering in memory or files to handle network issues or downstream outages gracefully. Fluentd maintains a low resource footprint, typically requiring 30-40 MB of RAM for a single instance, while achieving throughput of around 13,000 events per second per core on standard hardware, scalable to higher volumes in multi-process or distributed deployments. This efficiency makes it suitable for resource-constrained environments without sacrificing throughput for log aggregation tasks. Flexibility is a strength, with tag-based routing that allows complex configurations by matching events via directives like <match> in the configuration file, directing logs to specific filters or outputs based on tags assigned at ingestion. Beyond logs, it extends to metrics via built-in metrics plugins and to traces through compatibility with OpenTelemetry protocols using dedicated plugins. Security features include TLS support for encrypted transport in input and output plugins, configurable via the <transport> section with options for TLS versions (including 1.3 where supported) and certificate validation. Authentication is handled through plugins, such as shared-key mechanisms in the forward output for secure node-to-node communication.
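Encrypted transport of the kind described above is enabled in an input's <transport> section. A minimal sketch for the forward input follows; the certificate paths are placeholders, and TLSv1_3 assumes platform support:

```
<source>
  @type forward
  port 24224
  <transport tls>
    version TLSv1_3                             # where the platform supports TLS 1.3
    cert_path /etc/fluentd/certs/server.crt     # placeholder certificate path
    private_key_path /etc/fluentd/certs/server.key
  </transport>
</source>
```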
Integration with modern infrastructures is native, including deployment as a Kubernetes DaemonSet for cluster-wide log collection from pods and nodes. It also aligns with OpenTelemetry standards, enabling unified handling of logs, metrics, and traces in observability pipelines via protocol-compatible plugins. As of 2025, enhancements in version 1.19.0 include improved buffer evacuation for better error recovery in multi-tenant setups, zstd compression for optimized cloud-native transport, and enhanced metrics instrumentation, further bolstering reliability and observability in distributed environments.

Adoption and Applications

Notable Users

Fluentd has been adopted by major technology companies that have contributed to its development since its CNCF graduation in 2019. These organizations leverage Fluentd for large-scale log collection and processing in production environments. As of the latest available data, over 5,000 data-driven companies worldwide rely on Fluentd, with its largest deployments handling logs from more than 50,000 servers. Adoption spans key industry sectors, including cloud providers such as AWS and Google Cloud, financial institutions including major banks, and technology companies focused on data pipelines. Within the broader Fluent ecosystem, which includes Fluent Bit, containerized downloads have exceeded 15 billion, underscoring its trusted role in production-scale operations. As of 2025, post-CNCF graduation, community contributions have grown significantly, with over 219,000 total contributions from 8,954 organizations, including fast-growing startups and enterprises enhancing its extensibility and reliability. Case studies include DeFacto, which processes 27.5 million events per 100 hours as of 2023, and Bink, supporting a platform for over 500,000 users as of 2022. In recent years, there has been a noted trend toward the companion project Fluent Bit for lighter-weight deployments in resource-constrained environments.

Use Cases

Fluentd is widely applied in centralized log aggregation, collecting logs from distributed systems such as Kubernetes clusters and forwarding them to backends like Elasticsearch for unified storage and analysis. In these setups, Fluentd deploys as a DaemonSet or sidecar to tail container logs, parse them into a structured format, and route them reliably to central repositories, enabling efficient troubleshooting across large-scale environments. In cloud migration efforts, Fluentd facilitates log routing across multi-cloud environments by leveraging its buffering mechanisms to ensure data reliability during transfers. Its file- and memory-based buffers temporarily store events to handle network disruptions or backend unavailability, preventing loss in hybrid or transitioning infrastructures while supporting output plugins for diverse targets. Fluentd contributes to observability pipelines by integrating logs with metrics and traces, forming a comprehensive stack for full-system visibility. Through its JSON-structured event handling, it enriches log streams for correlation with other data, allowing teams to query and alert on combined signals in Prometheus-integrated systems and similar tools. For edge-to-cloud forwarding, Fluentd manages high-volume logs from edge devices or containers, collecting data at the edge, such as from IoT deployments, and streaming it to cloud services for deeper processing. This push-model approach uses input plugins like HTTP or the forward protocol to aggregate and buffer data from resource-constrained environments before reliable transmission, minimizing latency in distributed deployments. Custom processing with Fluentd often involves enriching logs with metadata to support security analytics or compliance auditing, using filters to add contextual details like hostnames or timestamps. The record_transformer filter, for instance, dynamically appends fields such as pod labels or geolocation data to event records, enabling advanced querying for threat detection or regulatory reporting without altering source applications.
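Metadata enrichment of the kind described above can be expressed with record_transformer. The tag pattern and added fields below are illustrative; `${time}` and `"#{Socket.gethostname}"` are standard record_transformer/config idioms:

```
<filter app.**>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"   # evaluated when the config is loaded
    received_at ${time}                # event timestamp, useful for audit trails
    environment production             # illustrative static metadata
  </record>
</filter>
```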
In microservices architectures, Fluentd aids debugging by aggregating service-specific logs into a central pipeline, applying filters to isolate issues across interconnected components. For disaster recovery, its persistent buffers ensure log durability during outages, allowing forwarding to resume once systems stabilize and maintaining audit trails for post-incident analysis.

Fluent Bit

Fluent Bit is a lightweight log processor and forwarder designed for resource-constrained environments, such as embedded systems and IoT devices. Developed in C by the Fluentd team at Treasure Data, it was created in 2014 as a complementary tool to Fluentd, addressing the need for a more efficient alternative where memory and CPU resources are limited. Unlike Fluentd, which is implemented in Ruby and carries a larger footprint, Fluent Bit maintains a minimal memory footprint of under 1 MB, compared to Fluentd's typical 30-40 MB usage, enabling faster parsing and processing while supporting similar pipelines. The project follows a modular architecture with inputs for collecting data from various sources, filters for enrichment and transformation, and outputs for routing to destinations, optimized particularly for forwarding logs, metrics, and traces to central Fluentd instances or directly to backends such as Elasticsearch. This allows Fluent Bit to handle data efficiently in distributed systems, making it suitable for containerized and embedded applications. As part of the Fluent ecosystem, it shares conceptual origins with Fluentd but emphasizes performance in low-overhead deployments. Fluent Bit entered the CNCF as a subproject of Fluentd and shares in its graduated standing, reflecting its maturity and widespread adoption within the cloud-native community. By 2025, it has surpassed 15 billion downloads, underscoring its efficiency and reliability across millions of daily deployments. Often used as an edge agent, Fluent Bit collects data from peripherals and ships it to aggregating Fluentd servers for further processing, enhancing overall logging architectures in hybrid environments. Recent enhancements as of 2025 include improved OpenTelemetry Protocol (OTLP) integration for unified logs, metrics, and traces, along with expanded multi-cloud support through plugins for major providers like AWS, Azure, and Google Cloud. These updates bolster its role in modern observability stacks, enabling seamless data routing across diverse infrastructures without compromising efficiency.

Comparison with Alternatives

Fluentd, implemented in Ruby, offers a lighter footprint than Logstash, which relies on the Java Virtual Machine (JVM) and JRuby, leading to higher resource demands of approximately 120 MB of memory for Logstash versus Fluentd's roughly 40 MB. This makes Fluentd more suitable for environments where efficiency is critical, though both tools can handle high throughput exceeding 10,000 events per second. Logstash excels at parsing unstructured logs through its grok filter plugin, which uses regular expressions for pattern matching, while Fluentd employs a tag-based routing system that simplifies complex data flows without the overhead of conditional if-then logic. In terms of extensibility, Fluentd benefits from over 500 community-contributed plugins for inputs, filters, and outputs, providing broader integration options than Logstash's approximately 200 centralized plugins. Compared to Vector, a Rust-based tool, Fluentd provides a more mature and extensive ecosystem, with over 500 plugins enabling intricate routing and transformations that suit scenarios requiring diverse integrations. Vector, however, prioritizes efficiency and built-in reliability features like delivery guarantees and backpressure management, achieving higher collection rates (often over twice those of similar tools in benchmarks) while using less memory, such as 0.2 to 0.5 times the footprint of alternatives under heavy workloads. With around 46 sources and a comparable number of sinks, Vector's curated components support high-performance pipelines but may require more custom development for highly specialized routing compared to Fluentd's plugin-driven flexibility. Fluentd serves as a central aggregator for processing and routing, consuming more resources (around 40-60 MB) and supporting full-featured transformations via its extensive plugin system, whereas Fluent Bit functions primarily as a lightweight forwarder with minimal overhead (under 1 MB of memory) and around 100 built-in plugins focused on basic collection and shipping.
This distinction positions Fluentd for backend aggregation in data centers or clusters handling complex enrichment, while Fluent Bit is optimized for edge devices or resource-constrained agents such as DaemonSets or sidecars. Migration between the two is facilitated by configuration translation tools that map Fluentd's Ruby-based directives to Fluent Bit's simpler syntax, allowing hybrid deployments where Fluent Bit forwards data to Fluentd for deeper processing. Selection criteria depend on deployment needs: opt for Fluentd in backend roles requiring robust plugin support and vendor-neutral aggregation, use Fluent Bit as an agent for low-overhead forwarding in distributed systems, and choose Vector for performance-critical pipelines emphasizing Rust's efficiency and native reliability. Logstash fits Elastic Stack ecosystems with strong parsing needs but at a resource cost. Hybrid setups, such as Fluent Bit agents feeding into Fluentd aggregators, are common for scalable logging in cloud-native environments.
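A hybrid setup of the kind just described pairs a forward output on each Fluent Bit agent with a forward input on the Fluentd aggregator. The sketch below shows both sides; the host name is a placeholder, and 24224 is the conventional forward port:

```
# Fluent Bit agent side (classic .conf syntax): ship everything to the aggregator
[OUTPUT]
    Name   forward
    Match  *
    Host   fluentd.example.internal   # placeholder aggregator host
    Port   24224

# Fluentd aggregator side: accept the forwarded stream for enrichment and routing
<source>
  @type forward
  port 24224
</source>
```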
| Aspect | Fluentd Strengths | Logstash Strengths | Vector Strengths | Fluent Bit Strengths |
| --- | --- | --- | --- | --- |
| Resource usage | Low memory (~40 MB); Ruby-based efficiency | Robust JVM parsing but high usage (~120 MB) | Ultra-low memory; high throughput | Minimal (~1 MB); edge-optimized |
| Ecosystem | 500+ plugins for complex routing | 200+ plugins; grok for logs | 46+ sources/sinks; built-in reliability | 100+ built-in; simple forwarding |
| Best for | Central aggregation, complex processing | Elastic Stack integration, unstructured parsing | High-performance pipelines | Lightweight agents, IoT/edge shipping |
| Limitations | Slower than Rust-based tools in benchmarks | JVM overhead; conditional routing complexity | Fewer plugins for niche integrations | Limited complex transformations |

References

  1. [1]
    What is Fluentd?
    Fluentd was conceived by Sadayuki "Sada" Furuhashi in 2011. Sada is a co-founder of Treasure Data, Inc., a primary sponsor of the Fluentd project. Since being ...Fluentd Is An Open Source... · Unified Logging With Json · Pluggable Architecture
  2. [2]
    Fluentd | CNCF
    Fluentd was accepted to CNCF on November 8, 2016 at the Incubating maturity level and then moved to the Graduated maturity level on April 11, 2019.
  3. [3]
    Fluentd | Open Source Data Collector | Unified Logging Layer
    Fluentd is an open source data collector for unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and ...Overview · Why use Fluentd? · Download · Plug-ins
  4. [4]
    Cloud Native Computing Foundation Announces Fluentd Graduation
    Apr 11, 2019 · Fluentd is its sixth project to graduate, following Kubernetes, Prometheus, Envoy, CoreDNS and containerd.
  5. [5]
    Treasure Data Closes $5M Series A Financing - FinSMEs
    Jul 24, 2013 · Treasure Data, a cloud-based analytics service, closed a $5m Series A financing. The round was led by Sierra Ventures.Missing: 5 million
  6. [6]
    Cloud Native Computing Foundation announces Fluentd graduation
    Fluentd is its sixth project to graduate, following Kubernetes, Prometheus, Envoy, CoreDNS and containerd.Missing: 8 | Show results with:8
  7. [7]
    Fluentd joins the Cloud Native Computing Foundation
    Nov 18, 2016 · ... CNCF. One of the biggest benefits of CNCF compared to other foundations are: Flexibility: development and roadmap continue being handled by ...
  8. [8]
    Scheduled support lifecycle announcement about Fluent Package v6
    Aug 29, 2024 · In short, we will ship fluent-package v6 in Aug, 2025. We keeps two release channels as follows: Here is the difference of these channels.
  9. [9]
    Fluentd Project Journey Report | CNCF
    Apr 1, 2020 · This report assesses the state of the Fluentd project and how CNCF has impacted its progress and growth. Without access to a multiverse to play ...
  10. [10]
    Life of a Fluentd event
    Jan 20, 2025 · The following article gives a general overview of how events are processed by Fluentd with examples. It covers the complete lifecycle including Setup, Inputs, ...Basic Setup · Event Structure · Filters · Labels
  11. [11]
    Multi Process Workers - Fluentd Doc
    Jul 8, 2022 · This feature launches two or more fluentd workers to utilize multiple CPU powers. This feature can simply replace fluent-plugin-multiprocess .Missing: scalability | Show results with:scalability
  12. [12]
    Parser Plugins - Fluentd Doc
    Jun 11, 2025 · Parser plugins in Fluentd allow custom data formats to be parsed when standard input plugins cannot, using a pluggable system.How To Use · List of Built-in Parsers
  13. [13]
    Config: Parse Section - Fluentd Doc
    Apr 15, 2024 · The `<parse>` section in Fluentd specifies how to parse raw data, and can be under `<source>`, `<match>`, or `<filter>`, using parser plugins ...
  14. [14]
    Buffer Plugins | Fluentd
    ### Summary of Buffers in Fluentd
  15. [15]
    Formatter Plugins - Fluentd Doc
    Feb 12, 2025 · Formatter plugins in Fluentd allow users to extend and reuse custom output formats when the default output format does not meet their needs.Missing: documentation | Show results with:documentation
  16. [16]
    Config: Format Section - Fluentd Doc
    Jun 3, 2021 · The `<format>` section in Fluentd specifies how to format records, using `@type` to define the formatter plugin, and can be under `<match>` or ...Format Section Overview · Formatter Plugin Type
  17. [17]
    Config File Syntax - Fluentd Doc
    May 26, 2025 · The configuration file allows the user to control the input and output behavior of Fluentd by 1) selecting input and output plugins; and, 2) specifying the ...
  18. [18]
    Plugin Development - Fluentd Doc
    Jun 20, 2024 · With fluentd, you can install and create custom plugins. To install or create a custom plugin, the file name need to be <TYPE>_<NAME>.rb .Overview · Writing Plugins · Writing Tests for Plugins · Writing Documents for Plugins
  19. [19]
    Plugin Management - Fluentd Doc
    Oct 20, 2021 · This article explains how to manage Fluentd plugins, including adding third-party plugins. fluent-gem The fluent-gem command is used to install Fluentd plugins.fluent-gem · Do not use unreliable plugins · p option · Plugin Version Management
  20. [20]
    Input Plugins - Fluentd Doc
    Aug 30, 2021 · Input plugins extend Fluentd to retrieve and pull event logs from the external sources. An input plugin typically creates a thread, socket, and a listening ...How to Write Input Plugin · Tail · Forward · SyslogMissing: system | Show results with:system
  21. [21]
    Output Plugins - Fluentd Doc
    Sep 5, 2022 · Fluentd v1.0 output plugins have three (3) buffering and flushing modes ... Non-Buffered mode does not buffer data and write out results.How to Write Output Plugin · Elasticsearch · Rewrite_tag_filter · Opensearch
  22. [22]
    grep - Fluentd Doc
    Jun 3, 2021 · The filter_grep filter plugin "greps" events by the values of specified fields. It is included in Fluentd's core.
  23. [23]
    fluent-plugins-nursery - GitHub
    Collaborate to maintain Fluentd plugins. fluent-plugins-nursery has 48 repositories available. Follow their code on GitHub.
  24. [24]
    Metrics Plugins - Fluentd Doc
    Oct 22, 2024 · Fluentd has a pluggable system called Metrics that lets a plugin store and reuse its internal state as metrics instances.
  25. [25]
    List of All Plugins - Fluentd
    Go here to browse the plugins by category: Input/Output plugins, Filter plugins, Parser plugins, Formatter plugins, and Obsoleted plugins.
  26. [26]
    Config: Transport Section - Fluentd Doc
    Dec 3, 2024 · To support the old style, Fluentd accepts TLS1_1 and TLSv1_1 values. NOTE: TLS1_3 is available when your system supports TLS 1.3. Signed ...
  27. [27]
    forward - Fluentd Doc
    Jul 30, 2025 · The out_forward Buffered Output plugin forwards events to other Fluentd nodes. This plugin supports load-balancing and automatic fail-over (i.e., active-active ...
  28. [28]
    Fluentd input/output plugin to forward OpenTelemetry Protocol data.
    This plugin emits Fluentd's metric data conforming to the OpenTelemetry Protocol. To output the data, it requires the opentelemetry output plugin. Root ...
  29. [29]
    Fluentd v1.19.0 has been released
    Aug 6, 2025 · v1.19.0 includes buffer evacuation, improved corruption detection, enhanced metrics, zstd compression, and a switch to the json gem for ...
  30. [30]
    Fluentd has Graduated!
    Apr 11, 2019 · Fluentd is a Cloud Native Computing Foundation (CNCF) graduated project. All components are available under the Apache 2 License. Recent Tweets ...
  31. [31]
    Top Companies Using Fluentd - ZoomInfo
    What companies are using Fluentd? From the data gathered in ZoomInfo, Amazon, UnitedHealth Group, Microsoft, BP, and JPMorgan Chase are using the Fluentd ...
  32. [32]
    Fluentd Rides Wave of Roll-Your-Own Observability - Datanami
    Jan 18, 2023 · It became a Graduated Project at the CNCF in 2019. In 2015, Treasure Data released Fluent Bit, which is a smaller, faster version of Fluentd.
  33. [33]
    fluentbit
    Deployed Over Fifteen Billion Times. An End to End Observability Pipeline. Fluent Bit is a super fast, lightweight, and highly scalable logging, metrics, ...
  34. [34]
    Logging Architecture
    Summary of using Fluentd for logging in Kubernetes, from the official Kubernetes documentation.
  35. [35]
    Overview - Fluentd Doc
    Jan 21, 2025 · Fluentd is a fully free and fully open-source log collector that instantly enables you to have a 'Log Everything' architecture with 125+ types of systems.
  36. [36]
    IoT Data Logger - Fluentd Doc
    Jul 23, 2019 · This article introduces how to transport sensor data from Raspberry Pi to the cloud, using Fluentd as the data collector. For the cloud side, we ...
  37. [37]
    record_transformer | Fluentd
    Summary of using the `record_transformer` filter plugin for enriching logs with metadata.
  38. [38]
    Centralized Application Logging - Fluentd
    Use Case: Centralized App Logging. Fluentd decouples application logging from backend systems via the unified logging layer. This layer allows developers and ...
  39. [39]
    A brief history of Fluent Bit
    Sep 25, 2025 · Eduardo Silva created Fluent Bit, a new open source solution, written from scratch and available under the terms of the Apache License v2.0.
  40. [40]
    How to Collect, Process, and Ship Log Data with Fluentd - Better Stack
    May 30, 2025 · Fluentd is a robust, open-source log shipper developed by Treasure Data. It excels at capturing logs from various sources, unifying them for ...
  41. [41]
    FluentD vs FluentBit - Choosing the Right Log Collector - SigNoz
    Sep 3, 2024 · Internet of Things (IoT): Fluentd and Fluent Bit can be used to collect and process data from IoT devices and forward it to a centralized ...
  42. [42]
    What is Fluent Bit? | Fluent Bit: Official Manual
    Sep 25, 2025 · Fluent Bit is an open-source telemetry agent specifically designed to efficiently handle the challenges of collecting and processing telemetry data.
  43. [43]
    How It Works - fluentbit
    Works for logs, metrics, and traces. All events are automatically tagged to determine filtering, routing, parsing, modification, and output ...
  44. [44]
    The beginner's guide to the CNCF landscape
    Nov 5, 2018 · Fluent Bit is more efficient in CPU and memory usage, but has more limited features than Fluentd. Fluentd was originally developed by Treasure Data ...
  45. [45]
    Graduated Project Lightning Talk: Fluent Bit - Eduardo Silva, Calyptia
    May 14, 2021 · Graduated project lightning talk on Fluent Bit, one of the CNCF-hosted projects, by Eduardo Silva, Calyptia.
  46. [46]
    Fluent Bit v4.0: Celebrating new features and 10th anniversary | CNCF
    Apr 25, 2025 · With over 15 billion downloads, integration with every major cloud provider, and adoption by thousands of enterprises, Fluent Bit has become ...
  47. [47]
    v4.0.4 - fluentbit
    Jul 9, 2025 · This release brings significant improvements to OpenTelemetry support, including a new internal OTLP interface and enhanced Lua scripting with ...
  48. [48]
    Fluentd to Fluent Bit: A migration guide | CNCF
    Oct 1, 2025 · With Fluentd, these custom plugins are “Ruby Gems” that you can download and install into existing or new installations or deployments. With ...
  49. [49]
    Fluentd vs Logstash: A Comparison of Log Collectors - Logz.io
    Nov 18, 2015 · On the other hand, Fluentd's tag-based routing allows complex routing to be expressed clearly. For example, the following configuration ...
  50. [50]
    Logstash, Fluentd, Fluent Bit, or Vector? How to choose the right ...
    Feb 10, 2022 · Fluentd is a log collector with a small memory footprint that handles various log sources and destinations. ... Also, Fluent Bit can scale and ...
  51. [51]
    Comparing Vector, Fluent Bit, Fluentd performance | by Ajay Gupta
    Sep 9, 2021 · We will compare the performance of log collectors Fluentd, Fluent Bit, and Vector based on log-collection rate, CPU, and memory.
  52. [52]
    Components - Sources, Transforms, and Sinks - Vector
    Vector components (sources, transforms, and sinks) enable you to collect, transform, and route data with ease; Vector provides 46 sources, including an AMQP source.
  53. [53]
    Fluentd and Fluent Bit | Fluent Bit: Official Manual
    Sep 25, 2025 · Fluentd is a full-scale ecosystem, while Fluent Bit is built on its ideas and is considered the next-generation solution, with higher ...
  54. [54]
    Fluentd vs. Fluent Bit: Side by Side Comparison - Logz.io
    Jun 28, 2018 · A vanilla Fluentd deployment will run on ~40MB of memory and is capable of processing above 10,000 events per second. Adding new inputs or ...