Fluent Bit / Articles

Docker Logging with Fluent Bit and Elasticsearch

Recent versions of Docker come with a logging layer feature which allows you to define specific drivers to handle container application logs, specifically the ones that are sent to the standard output (stdout) and standard error (stderr) interfaces.

Starting from Docker v1.8, a Fluentd Logging Driver is available which implements the Forward protocol. Fluent Bit has native support for this protocol, so it can be used as a lightweight log collector. In this article we will demonstrate how to collect Docker logs with Fluent Bit and aggregate them into an Elasticsearch database.

Docker Logs

Every message that a containerized application writes to the stdout or stderr interface is packaged and associated with some metadata:
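For example, a record generated by the Fluentd logging driver looks like the following (a sketch based on the query results shown later in this article; actual values depend on your container):

```json
{
    "container_id": "5c4bf54b660a...",
    "container_name": "/sleepy_lalande",
    "source": "stdout",
    "log": "Hello Fluent Bit!"
}
```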

The records come with the container name, container ID and interface, among others. Our goal is to connect the Docker logging driver with Fluent Bit so we can send the logs to Elasticsearch:


In order to follow this tutorial, make sure that you have an updated Linux system, check that Docker is installed and verify that the Elasticsearch service is up and running. For more details about how to accomplish this, check their official documentation.

Fluent Bit Setup

Fluent Bit as a log collector has two main components: inputs and outputs. Inputs define where the data is collected from, and outputs define where it should go.

We assume Fluent Bit or TD Agent Bit is installed on your system; if not, please refer to the official documentation for installation steps.

Create a new Fluent Bit configuration file called docker_to_es.conf and add the following content:

    [SERVICE]
        Flush        5
        Daemon       Off
        Log_Level    debug

    [INPUT]
        Name   forward
        Port   24224

    [OUTPUT]
        Name  es
        Match *
        Port  9200
        Index fluentbit
        Type  docker
The configuration above tells the engine to flush the records every 5 seconds. Fluent Bit will listen for Forward messages on TCP port 24224 and deliver them to an Elasticsearch service listening on TCP port 9200. The wildcard in the Match rule means it will deliver all incoming records.

Note that Elasticsearch requires an Index and a Type. This can be confusing if you are new to Elasticsearch; if you have used a relational database before, they can be compared to the database and table concepts.
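With the configuration above, the records will be stored in the fluentbit index under the docker type, so, assuming Elasticsearch runs on localhost, they can later be retrieved with a query like:

```shell
$ curl 'http://127.0.0.1:9200/fluentbit/docker/_search'
```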

Now launch Fluent Bit with the configuration file just created:

$ fluent-bit -c docker_to_es.conf
[2016/07/07 16:26:24] [debug] [service] loading input: forward
Fluent-Bit v0.8.3
Copyright (C) Treasure Data

[2016/07/07 16:26:24] [ info] starting engine
[2016/07/07 16:26:24] [debug] [in_fw] Listen='' TCP_Port=24224
[2016/07/07 16:26:24] [ info] [in_fw] binding
[2016/07/07 16:26:24] [debug] [es] host= port=9200 index=fluentbit type=docker
[2016/07/07 16:26:24] [debug] [router] input=forward.0 'DYNAMIC TAG'

Leave that terminal open so you can watch the debug messages and confirm how the data is flowing.
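If you want to verify the Forward input before involving Docker, and you happen to have Fluentd installed, its fluent-cat utility can send a test record to localhost on TCP port 24224 (the tag debug.test here is arbitrary):

```shell
$ echo '{"message": "test"}' | fluent-cat debug.test
```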

Elasticsearch Test

Assuming your Elasticsearch service is up, please perform a connection test. Adjust the Host and TCP port as required:

$ curl -X GET
{
  "name" : "Janus",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Docker Setup

For this test we will use an Ubuntu image for our container. Get the image with:

$ docker pull ubuntu

After a few minutes, you should get the ubuntu base image:

$ docker images
REPOSITORY    TAG       IMAGE ID        CREATED         SIZE
ubuntu        latest    0f192147631d    8 days ago      132.7 MB

Running the Docker Container

The following command will run a simple echo command in a container that prints a message to the standard output (stdout). Pay attention to the extra options, which specify where the logs should go:

$ docker run -t -i --log-driver=fluentd ubuntu echo "Hello Fluent Bit!"

When the fluentd driver is specified, it will by default forward the logs to localhost on TCP port 24224. If you want to change that, use the --log-opt fluentd-address=host:port option.
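For example, to ship the logs to a Fluent Bit instance running on a different host (the address below is only a placeholder; replace it with your collector's address):

```shell
$ docker run -t -i --log-driver=fluentd \
    --log-opt fluentd-address=192.168.1.50:24224 \
    ubuntu echo "Hello Fluent Bit!"
```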

Query Elasticsearch

After five seconds you will be able to check the records in your Elasticsearch database; do so with the following command:

$ curl -X GET ''
{"took":1,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"fluentbit","_type":"docker","_id":"AVXHyv2XrpN9Bg1zuGCD","_score":1.0,"_source":{"date":1467935812,"container_id":"5c4bf54b660ae27245556b9d649d487dd930581ba7d05b59f6674d4e0e737521","container_name":"/sleepy_lalande","source":"stdout","log":"Hello Fluent Bit!\r"}}]}}

The response is a JSON message containing the records found. In the example above, the relevant _source field contains:

    {
        "date": 1467935812,
        "container_id": "5c4bf54b660ae27245556b9d649d487dd930581ba7d05b59f6674d4e0e737521",
        "container_name": "/sleepy_lalande",
        "source": "stdout",
        "log": "Hello Fluent Bit!\r"
    }
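If you want to consume such a response programmatically instead of reading the raw JSON, here is a minimal sketch in Python; the response string below is a trimmed copy of the example above, and in practice you would read it from the actual HTTP response:

```python
import json

# Trimmed copy of the Elasticsearch search response shown above.
response = """
{"hits": {"total": 1,
          "hits": [{"_index": "fluentbit", "_type": "docker",
                    "_source": {"date": 1467935812,
                                "container_name": "/sleepy_lalande",
                                "source": "stdout",
                                "log": "Hello Fluent Bit!\\r"}}]}}
"""

data = json.loads(response)
for hit in data["hits"]["hits"]:
    src = hit["_source"]
    # Each hit carries the Docker metadata plus the log line itself;
    # strip() removes the trailing carriage return added by the TTY.
    print(src["container_name"], src["source"], src["log"].strip())
```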

At this point everything looks pretty good. The next step would be to use Kibana to visualize the logs. Enjoy!