Release Notes v3.0.0

Next generation Telemetry Agent for Logs, Metrics and Traces.

March 3, 2024

Fluent Bit is a Fast and Lightweight Data Processor and Forwarder for Linux, BSD and OSX. We are proud to announce the availability of Fluent Bit v3.0.

If you are upgrading from a previous version, you must read the Upgrading Notes section of our documentation:

https://docs.fluentbit.io/manual/installation/upgrade_notes

Introduction

Fluent Bit, a CNCF graduated project under the umbrella of Fluentd, announces the availability of v3.0.

In every release there are many improvements and fixes; in these notes we will cover the major changes that will make your infrastructure happier ;)

HTTP/2 Support

In today’s world, most modern applications prefer HTTP/2 over HTTP/1.1 for communication with web services. While HTTP/1.1 compatibility remains, in Fluent Bit we want to deliver support for industry standards like this one. Fluent Bit is not only used as a sender but also as an aggregator/collector that receives telemetry data for different types of signals, such as Logs, Metrics and Traces.

With the new addition of HTTP/2 in inputs/sources/receivers that operate on top of the HTTP protocol, such as OpenSearch, Splunk, and OpenTelemetry, those plugins can now transparently operate in either HTTP/1.1 or HTTP/2 mode without any changes, and with better performance thanks to HTTP/2’s concurrency. The new transport is enabled by default, and it is up to the client to use it or prefer another mechanism; it can be disabled with the option http2 off in your input plugin configuration section.
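
As a minimal sketch, assuming the opentelemetry input plugin (the listener address and port below are illustrative), disabling the new transport looks like this:

pipeline:
  inputs:
    - name: opentelemetry
      listen: 0.0.0.0
      port: 4318
      # force HTTP/1.1 only; remove this line to keep HTTP/2 enabled (default: on)
      http2: off
  outputs:
    - name: stdout
      match: '*'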

Processors

Months ago, we introduced the concept of Processors which, similar to the Filters functionality, allow you to enrich or transform the telemetry data being received, but with the following differences:

  • They are not routable. Instead, they are attached directly to an input plugin and run one after the other.
  • If the input plugin is running in threaded mode, meaning that it runs in a separate thread, its processors will run there too, significantly reducing the contention that Filters used to generate in the main event loop and thread; as a side effect you get better performance (see the sketch below).
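
As a hedged illustration, and assuming the threaded option supported by input plugins, attaching a processor to a threaded input could look like this minimal sketch:

pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "hello"}'
      # run this input, and therefore its processors, in a separate thread
      threaded: true

      processors:
        logs:
          - name: content_modifier
            action: insert
            key: "source"
            value: "threaded-input"
  outputs:
    - name: stdout
      match: '*'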

In this new version we are adding three new processors:

Content Modifier

Similar to the functionality exposed by filters, this new processor aims to unify the logic to manipulate content in Logs and Traces; it can also explicitly operate only on the metadata/attributes or on the content of the messages themselves.

The following example creates a simple pipeline that generates a dummy message with 2 keys. The content_modifier processor is loaded to work over the logs stream, where it will insert a new key called color with the value blue. Once the key has been inserted, the next action is to extract and decompose the value of the key http.url into multiple keys by using a regular expression pattern:

pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "123.4", "http.url": "https://www.google.com/search?q=example"}'
      rate: 1

      processors:
        logs:
          - name: content_modifier
            action: insert
            key: "color"
            value: "blue"

          - name: content_modifier
            action: extract
            key: "http.url"
            pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))?
  outputs:
    - name: stdout
      match: '*'
      format: json_lines

The output of that processing will be:

{
  "date": 1711036844.96304,
  "key1": "123.4",
  "http.url": "https://www.google.com/search?q=example",
  "color": "blue",
  "http_protocol": "https",
  "http_domain": "www.google.com",
  "http_path": "/search",
  "http_query_params": "q=example"
}
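
The same processor exposes more actions beyond insert and extract. As a hedged sketch, assuming the rename and delete actions and applying them to the record above, the processors section could be extended like this:

      processors:
        logs:
          - name: content_modifier
            # rename the dummy key to a friendlier, hypothetical name
            action: rename
            key: "key1"
            value: "request_id"

          - name: content_modifier
            # drop the original composite URL now that it has been decomposed
            action: delete
            key: "http.url"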

Metrics Selector

When collecting metrics from the host, or receiving or scraping them from a remote endpoint, there are many cases where you are not interested in everything that has been collected. The metrics_selector processor allows you to specify which metrics to include and which to exclude by using patterns on the metric name.

In the following example, generated on a macOS system where Fluent Bit is gathering metrics from the host by using the node_exporter_metrics input plugin, we have attached a simple processor that will only include metrics whose name starts with node_disk_w (we are using a regular expression to perform the match):

pipeline:
  inputs:
    - name: node_exporter_metrics
      processors:
        metrics:
          - name: metrics_selector
            action: include
            metric_name: /^node_disk_w.*/
  outputs:
    - name: stdout
      match: "*"
      format: json_lines

Instead of getting the hundreds of metrics generated, only the following will be available in the pipeline for processing and delivery:

2024-03-21T21:45:39.028335270Z node_disk_writes_completed_total{device="disk0"} = 21667544
2024-03-21T21:45:39.028342061Z node_disk_written_sectors_total{device="disk0"} = 5289.927734375
2024-03-21T21:45:39.028362895Z node_disk_write_time_seconds_total{device="disk0"} = 1728.3985221549999
2024-03-21T21:45:39.028352270Z node_disk_written_bytes_total{device="disk0"} = 404240666624
2024-03-21T21:45:39.028373228Z node_disk_write_errors_total{device="disk0"} = 0
2024-03-21T21:45:39.028383186Z node_disk_write_retries_total{device="disk0"} = 0
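
The opposite selection is also possible: as a small sketch under the same assumptions, switching the action to exclude drops the matching metrics and keeps everything else:

pipeline:
  inputs:
    - name: node_exporter_metrics
      processors:
        metrics:
          - name: metrics_selector
            # drop write-related disk metrics, keep all the others
            action: exclude
            metric_name: /^node_disk_w.*/
  outputs:
    - name: stdout
      match: "*"
      format: json_lines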

SQL

We don’t believe everybody needs to learn every single processor or filter provided by Fluent Bit to achieve data selection; however, many of you have some knowledge of SQL (Structured Query Language). The new SQL processor allows you to define queries using SQL expressions to select parts of your Logs and Traces, which can also be combined with conditionals. Here is a simple example:

pipeline:
  inputs:
    - name: dummy
      dummy: '{"key1": "123.4", "http.url": "https://fluentbit.io/search?q=docs"}'

      processors:
        logs:
          - name: content_modifier
            action: extract
            key: "http.url"
            pattern: ^(?<http_protocol>https?):\/\/(?<http_domain>[^\/\?]+)(?<http_path>\/[^?]*)?(?:\?(?<http_query_params>.*))?

          - name: sql
            query: "SELECT http_domain FROM STREAM;"

  outputs:
    - name: stdout
      match: '*'
      format: json_lines

The dummy plugin will generate a sample message with two keys, key1 and http.url. Then the content_modifier processor will extract the content of the key http.url as a series of key/value pairs by using the regular expression defined in pattern. Finally, the SQL processor will select only the key http_domain as part of the result:

{
  "date": 1711059261.630668,
  "http_domain": "fluentbit.io"
}
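
Since the processor also supports conditionals, a hedged sketch of the same pipeline with a WHERE clause (reusing the keys extracted above) could look like this:

          - name: sql
            # keep only records whose extracted domain matches;
            # non-matching records are dropped from the stream
            query: "SELECT http_domain, http_path FROM STREAM WHERE http_domain = 'fluentbit.io';"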

Contributors

In every release there are many people involved, making contributions in different areas like bug reporting, troubleshooting, documentation and coding. Without these contributions from the community, the project would not be the same or in the good shape it is in today. So THANK YOU to everyone who takes part in this journey!

Join us

We want to hear from you; our community is growing and you can be part of it! You can reach us through our community channels.


We are part of a wide community, with no vendor lock-in.