March 25, 2020
Fluent Bit is a Fast and Lightweight Data Processor and Forwarder for Linux, BSD and OSX. We are proud to announce the availability of Fluent Bit v1.4.
If you are upgrading from a previous version, make sure to read the Upgrading Notes section of our documentation:
https://docs.fluentbit.io/manual/installation/upgrade_notes
Fluent Bit v1.4 is the next major release and includes several improvements.
Keep calm, no changes, just some pending adjustments in our banner and messaging.
Fluent Bit has always been a Fluentd sub-project, and since both projects joined the Cloud Native Computing Foundation (CNCF), the copyright message update had been pending. When you start Fluent Bit now, you will see the following banner:
* Copyright (C) 2019-2020 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
As a CNCF sub-project, all contributions belong to The Fluent Bit Authors
:)
Fluent Bit project adoption keeps growing, as does the number of companies contributing back to it. As part of this growth, we now have more maintainers! Here is the list of current and active project maintainers:
Maintainer Name | Components | Company |
---|---|---|
Eduardo Silva | All | Arm Treasure Data |
Masoud Koleini | Stream Processor | Arm |
Fujimoto Seiji | Windows Platform | [Clear Code](https://www.clear-code.com/) |
Wesley Pettit | Amazon Plugins (AWS) | Amazon Web Services |
Cedric Lamoriniere | Datadog Output Plugin | Datadog |
Jonathan Gonzalez V. | PostgreSQL Output Plugin | 2ndQuadrant |
For monitoring purposes, overall metrics are good, but knowing the current state of data ingestion can be very helpful for performance configuration and general troubleshooting.
This new version comes with a feature to dump the internal state of data ingestion. The dump can be triggered at any time by sending the Unix signal CONT to the process, e.g.:
kill -CONT `pidof fluent-bit`
For more details about its output and a description of each value, refer to the official documentation here:
https://docs.fluentbit.io/manual/administration/dump-internals-signal
Our internal logging API has been extended, and now every plugin instance can set its preferred logging level through the Log_Level property. Usage:
[OUTPUT]
Name http
Match *
Log_Level debug
Host 192.168.3.4
Port 9090
We have implemented native support for the AWS SigV4 protocol, which means we can now sign HTTP requests for Amazon services. This is one of the foundations for upcoming Amazon Services plugins (planned for the v1.5 release).
Keep reading to learn more about our extended support for AWS Elasticsearch Service.
Our storage layer, based on the Chunk I/O library, has been improved and upgraded to its latest version, v1.0.3. The new version comes with performance improvements and overall optimizations: the number of required system calls has been reduced, and CRC32 checksum calculation is disabled when not needed.
In addition, some improvements were made to the Fluent Bit logic that decides which Chunk each write should go to; this helped reduce the number of Chunks in the file system and the load generated by the storage layer.
When flushing data through an output plugin that requires network I/O, the default behavior is to create a new TCP connection on every flush. This makes sense when flushes are infrequent, but for streaming data, or for cases where we need to reduce the number of repeated TLS handshakes, we have implemented optional KeepAlive support.
KeepAlive mode is enabled in the Output plugin configuration; when enabled, TCP/TLS connections can be reused. From a configuration perspective, we now expose two new properties:
Property | Description | Default |
---|---|---|
KeepAlive | Enable or disable KeepAlive mode | Off |
KeepAlive_Timeout | Set an expiration time for unused KeepAlive connections. If a KeepAlive connection is not used within KeepAlive_Timeout seconds, it will be closed. | 10 |
Configuration example:
[OUTPUT]
Name http
Match *
KeepAlive on
KeepAlive_Timeout 10
Format json_lines
When connecting to a third-party service using TLS, there are cases where the TLS handshake is never completed by the remote peer. To avoid waiting indefinitely for a TCP disconnection, we now enforce our own timeout: if a TLS handshake has not finished within the first 5 seconds (the default value), the connection is dropped and retried later.
Configuration example:
[OUTPUT]
Name http
Match *
tls on
tls.verify on
tls.handshake_timeout 5
DISCLAIMER: Fluent Bit Config Maps are NOT related to Kubernetes ConfigMaps
As the project grows, we need better ways to alert users about configuration errors and better ways to offer help.
This version introduces the concept of Config Maps, a mechanism through which plugins can register their expected configuration properties with the engine, specifying the data type and a further description of each one. If the user sets an option that is not registered, the plugin won't start and the user will get a proper error message.
In addition, a new command line helper allows us to retrieve all available configuration properties with further details. Take the following example, where the simple command fluent-bit -F kubernetes -h retrieves a full list of options:
HELP
kubernetes filter plugin
DESCRIPTION
Filter to append Kubernetes metadata
OPTIONS
buffer_size buffer size to receive response from API server
> default: 32K, type: size
tls.debug set TLS debug level: 0 (no debug), 1 (error), 2
(state change), 3 (info) and 4 (verbose)
> default: 0, type: integer
tls.verify enable or disable verification of TLS peer
certificate
> default: true, type: boolean
tls.vhost set optional TLS virtual host
> type: string
merge_log merge 'log' key content as individual keys
> default: false, type: boolean
merge_parser specify a 'parser' name to parse the 'log' key
content
> type: string
merge_log_key set the 'key' name where the content of 'key' will
be placed. Only used if the option 'merge_log' is
enabled
> type: string
merge_log_trim remove ending '\n' or '\r' characters from the log
content
> default: true, type: boolean
....
The Fluent Bit community continues contributing improvements and extensions; the following is a summary of the new plugins that are part of the v1.4 release:
The new AWS Metadata filter allows you to enrich logs with AWS metadata. For more details about its usage, please refer to the documentation here:
https://docs.fluentbit.io/manual/pipeline/filters/aws-metadata
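As a quick illustration, a minimal sketch of the filter configuration might look like the following (assuming the filter is registered under the name aws and its defaults inject the common metadata keys; check the documentation above for the exact property set):
[FILTER]
Name aws
Match *
# assumed property: selects the EC2 instance metadata service version
imds_version v1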
We now expose a new filter called Expect, which aims to be used as a mechanism to test the content of your records after one or multiple modifications. This filter simplifies unit testing for the end user and is flexible enough to validate the existence of keys, subkeys and expected values.
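As a sketch of how such a test might be declared (the rule name key_exists and the action property below are assumptions to verify against the filter documentation):
[FILTER]
Name expect
Match *
# require that the 'log' key exists after previous modifications
key_exists log
# stop processing with an error if a rule fails ('warn' would only log it)
action exit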
Tags are what makes routing possible. Tags are set in the Input definitions where records are generated, but there are certain scenarios where it might be useful to modify the Tag within the pipeline to perform more advanced and flexible routing.
The new rewrite_tag filter allows you to re-emit a record under a new Tag. Once a record has been re-emitted, the original record can be preserved or discarded.
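A minimal sketch of a rule (assuming the plugin exposes a Rule property taking a key, a regular expression, the new Tag and a keep flag; the key and Tag values are illustrative):
[FILTER]
Name rewrite_tag
Match app.*
# if $level matches 'error', re-emit the record under 'errors.<original tag>'
# and discard the original copy (keep = false)
Rule $level ^(error)$ errors.$TAG false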
We have added a new option to set the TLS virtual host name through the tls.vhost property.
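For instance (a sketch; the host name is illustrative):
[OUTPUT]
Name http
Match *
tls on
tls.verify on
# present a virtual host name (SNI) different from the connection Host
tls.vhost logs.example.com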
Our users have asked for a native integration with one of the most popular relational databases: PostgreSQL. In this new version of Fluent Bit, we have introduced a native output plugin for PostgreSQL Server >= 9.4 that takes advantage of the native JSON type.
Configuration example:
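A minimal sketch, assuming the plugin is named pg and exposes the usual connection properties (Host, Port, User, Password, Database, Table; verify the names in the plugin documentation):
[OUTPUT]
Name pg
Match *
# connection settings (values are illustrative)
Host 127.0.0.1
Port 5432
User fluentbit
Password fluentbit
Database fluentbit
Table fluentbit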
We have implemented the native AWS SigV4 protocol, so our Elasticsearch output plugin is now compatible with Amazon Elasticsearch Service.
For more details about the configuration of credentials and AWS mode, please refer to the following documentation section:
https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch#fluent-bit-amazon-elasticsearch
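As a sketch, enabling the Amazon mode should come down to a couple of extra properties on the es output (the AWS_Auth and AWS_Region names are assumptions to check against the documentation section above; the endpoint is illustrative):
[OUTPUT]
Name es
Match *
Host my-domain.us-east-1.es.amazonaws.com
Port 443
tls On
# sign every request with AWS SigV4 credentials for the given region
AWS_Auth On
AWS_Region us-east-1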
The Datadog connector has been extended and now supports HTTP proxies through the configuration property proxy. For more details, refer to the plugin documentation:
https://docs.fluentbit.io/manual/pipeline/outputs/datadog
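For example (a sketch; the apikey placeholder and proxy URL are illustrative):
[OUTPUT]
Name datadog
Match *
apikey <YOUR_DATADOG_API_KEY>
# route the outgoing traffic through an HTTP proxy
proxy http://my-proxy.example.com:8080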
On every release, many people contribute in different areas such as bug reporting, troubleshooting, documentation and coding. Without these contributions from the community, the project would not be in the good shape it is today. So THANK YOU to everyone who takes part in this journey!
We want to hear from you! Our community is growing and you can be part of it. You can contact us at:
Check out the Release Notes, read the Updated Documentation or jump directly to the Downloads Section.
We are part of a wide community; no vendor lock-in.