A good logging solution is crucial for almost any project, and it makes debugging applications much easier. The ELK (Elasticsearch / Logstash / Kibana) stack is popular across different platforms and is often the choice for an in-house logging solution. Unlike Docker Compose or Swarm, Kubernetes does not give us a way to specify a logging driver for each container individually. We could set the logging driver at the Docker engine level, but that is not a pretty solution. Since all logs are stored as files inside /var/log/containers, we can instead run an agent deployed as a DaemonSet, which reads those files on each worker node and ships them to Logstash.
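With Docker's default json-file logging driver, each line in those files is a JSON object wrapping the original log message, which is why the Filebeat config below uses the json.* options. A minimal sketch of what gets decoded (the log line itself is made up for illustration):

```python
import json

# One line from a hypothetical file under /var/log/containers,
# as written by Docker's json-file logging driver.
raw_line = '{"log":"user signed in\\n","stream":"stdout","time":"2017-05-01T12:00:00.0Z"}'

entry = json.loads(raw_line)
# "log" holds the actual message; Filebeat is pointed at it
# via json.message_key: log in the config below.
message = entry["log"].rstrip("\n")
print(message)          # → user signed in
print(entry["stream"])  # → stdout
```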

Filebeat agent

For an agent, we will use the Filebeat daemon, which is a replacement for the Logstash forwarder. There isn't an official image available, but I created one, and it is available here: https://hub.docker.com/r/komljen/filebeat .

This is the Filebeat configuration file, ready for Kubernetes, that is baked into the image:

```yaml
filebeat.registry_file: /var/log/containers/filebeat_registry
filebeat.prospectors:
- input_type: log
  paths:
    - "/var/log/containers/*.log"
  exclude_files: ['filebeat.*log', 'kube.*log']
  symlinks: true
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  multiline.pattern: '^\s'
  multiline.match: after
  document_type: kube-logs
output.logstash:
  hosts: ${LOGSTASH_HOSTS}
  timeout: 15
logging.level: ${LOG_LEVEL}
```

From this config, we can see that Filebeat will pick up all logs from the /var/log/containers directory, skipping its own logs and logs from kube pods, in case you want those kept separate.
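The multiline.pattern: '^\s' setting, combined with multiline.match: after, appends any line that starts with whitespace to the previous event, which keeps things like Java stack traces as a single log entry. A rough sketch of that merging logic (simplified for illustration, not Filebeat's actual implementation):

```python
import re

def merge_multiline(lines, pattern=r'^\s'):
    """Append lines matching the pattern to the previous event,
    mimicking multiline.pattern with multiline.match: after."""
    events = []
    for line in lines:
        if events and re.match(pattern, line):
            events[-1] += "\n" + line  # continuation line: fold into previous event
        else:
            events.append(line)        # new event
    return events

logs = [
    'Exception in thread "main" java.lang.NullPointerException',
    "    at com.example.App.run(App.java:11)",
    "    at com.example.App.main(App.java:5)",
    "Next independent log line",
]
# The two indented stack-trace lines are folded into the first event,
# so this yields two events instead of four.
print(merge_multiline(logs))
```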

Also, to override this file, you can create a Kubernetes ConfigMap resource and mount it in the container. Instructions for that are here: https://github.com/komljen/docker-filebeat .
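As a sketch of that override (the resource name and data key here are illustrative; see the repository above for the exact instructions), the ConfigMap could look like this, with the custom filebeat.yml then mounted over the image's default config path in the DaemonSet spec:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.registry_file: /var/log/containers/filebeat_registry
    # rest of your custom Filebeat configuration goes here
```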

The Filebeat container will be deployed as a DaemonSet, which means it will be running on each worker node. Here is the Filebeat DaemonSet config:

```bash
cat > filebeat-ds.yaml << EOF
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  labels:
    app: filebeat
spec:
  template:
    metadata:
      labels:
        app: filebeat
      name: filebeat
    spec:
      containers:
      - name: filebeat
        image: komljen/filebeat
        resources:
          limits:
            cpu: 50m
            memory: 50Mi
        env:
        - name: LOGSTASH_HOSTS
          value: logstash:5000
        - name: LOG_LEVEL
          value: info
        volumeMounts:
        - name: varlog
          mountPath: /var/log/containers
        - name: varlogpods
          mountPath: /var/log/pods
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/containers
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF
```

Create the Kubernetes resource:

```bash
kubectl create -f filebeat-ds.yaml --namespace=default
```

ELK stack

Because there are a lot of files for the ELK stack, you can find all of them here: https://github.com/komljen/kube-elk-filebeat .

All images are built from the official Elastic images (Alpine based), with small changes in the configs, and all are available on Docker Hub. Clone this repository and create all the Kubernetes resources for the ELK stack:

```bash
git clone https://github.com/komljen/kube-elk-filebeat
cd kube-elk-filebeat
kubectl create -f kubefiles/ -R --namespace=default
```

Kibana should be running on port 30000 and be accessible from any worker node. To configure it, open a web browser, replace the default index name logstash-* with filebeat-*, choose the time-field name, and click Create. All logs should then be visible in the Discover menu.

NOTE: This will not work if your Kubernetes cluster is managed by Rancher, because the Kubernetes logs will not be available at the usual location, /var/log/containers .