Strategies for Rotating Docker Logs

Docker container logs are usually written to stdout or stderr, which lets us inspect what happens inside a container. On disk, however, those logs are stored at /var/lib/docker/containers/[container-id]/*.log.

There is no log rotation by default, and that should worry you.

Docker supports a list of logging drivers, and the default driver is json-file. While the local driver rotates logs on its own, json-file does not rotate anything by default; it does, however, provide options such as max-size and max-file, which are enough for most scenarios. Alternatively, logrotate can be used in some cases.
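
You can check which log drivers your host supports; docker info exposes them under Plugins.Log (the exact list varies by Docker version):

docker info --format '{{.Plugins.Log}}'
# [awslogs fluentd gcplogs gelf journald json-file local splunk syslog]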

Verbose applications consume a lot of disk space, so you should keep an eye on logs: a full disk can bring your application down.
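
A quick way to see how much space container logs are consuming right now (paths follow the default json-file layout):

sudo du -h /var/lib/docker/containers/*/*.log | sort -h | tail -n 5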

You can use either the json-file driver options or logrotate to manage your Docker logs. Let's run some simple tests with these two alternatives.

Using json-file

To begin, check which log driver the daemon is using; if none is specified, the default is json-file:

docker info --format '{{.LoggingDriver}}'
# json-file

Even though the default is json-file, rotation only happens once we set the rotate options. Create (or edit) the file /etc/docker/daemon.json:

sudo nano /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
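
After saving, restart the Docker daemon so that new containers pick up these defaults (assuming a systemd-based host), then confirm the driver:

sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'
# json-file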

These log driver settings apply only to containers created after the daemon restart, so you must recreate your existing containers. The log driver can also be set per container or per docker-compose file, as shown below.

docker run

docker run \
    --log-driver json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3 \
    -p "2001:2001" \
    traefik/whoami --port 2001 --name foo-logs --verbose
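
To double-check that the options were applied, you can inspect the container's log configuration (docker ps -lq grabs the ID of the most recently created container; the output shown is illustrative):

docker inspect --format '{{json .HostConfig.LogConfig}}' $(docker ps -lq)
# {"Type":"json-file","Config":{"max-file":"3","max-size":"10m"}}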

docker-compose with logging

version: '3.9'

services:
  whoami:
    image: traefik/whoami
    command: --port 2001 --name foo-log --verbose
    ports:
      - 2001:2001
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
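
To watch rotation in action, start the stack and generate some traffic; since whoami runs with --verbose, every request is logged, so the file should hit max-size quickly. This is a rough sketch assuming curl is installed and the compose service is named whoami:

docker compose up -d

# hammer the endpoint so the json-file log grows past 10 MB
for i in $(seq 1 50000); do curl -s http://localhost:2001 > /dev/null; done

# watch the files being capped by the json-file driver (at most 3 x 10 MB)
sudo ls -lh /var/lib/docker/containers/$(docker compose ps -q whoami)/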

Using Logrotate

logrotate is a well-known tool from the Linux world, designed to help system administrators by compressing, mailing, moving, and deleting log files on a daily, weekly, monthly, or size-based basis.

To start, create a file named docker-container inside /etc/logrotate.d; you will need root privileges, and to keep things simple, use nano 🙄.

sudo nano /etc/logrotate.d/docker-container
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  size=1M
  missingok
  delaycompress
  copytruncate
}
  • rotate: the number of rotations a log goes through before being removed or mailed;
  • daily: logs are rotated every day;
  • compress: older log files are compressed with gzip;
  • size: log files are rotated only if they exceed 1 megabyte;
  • missingok: if a log file is missing, move on to the next one without raising an error;
  • delaycompress: postpone compression of the previous log file until the next rotation cycle; this only has an effect when combined with compress;
  • copytruncate: copy the log file, then truncate the original in place, so Docker can keep writing to the same file descriptor;

More options can be found in the logrotate docs.
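
Before forcing a rotation, you can dry-run the configuration with the debug flag, which prints what logrotate would do without touching any files:

sudo logrotate -d /etc/logrotate.d/docker-container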

So let's put log rotation to the test.

sudo logrotate -fv /etc/logrotate.d/docker-container

Well done! After rotation, a new rotated file is created next to the original log, and it gets compressed on the following cycle thanks to delaycompress. Logrotate is now working for you.
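
You can confirm by listing the container log directories; thanks to delaycompress, the newest rotated file stays uncompressed for one cycle (the names below are illustrative):

sudo ls -lh /var/lib/docker/containers/*/*.log*
# abc123...-json.log
# abc123...-json.log.1
# abc123...-json.log.2.gz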

Plus! Ansible

When dealing with a large number of servers, you should use automation to keep tasks organized and as productive as possible. I'll provide some simple recipes.

If you prefer, you can clone the repository: https://github.com/williampsena/ansible-recipes.

Using json-file

---
- name: Set up docker logging driver to json-file with rotate
  gather_facts: no
  hosts: all
  become: yes

  vars:
    DAEMON_FILE: /etc/docker/daemon.json

  tasks:
    - name: check if docker daemon file exists
      stat:
        path: "{{ DAEMON_FILE }}"
      register: config_stats

    - block:
        - name: read the docker daemon file
          slurp:
            src: "{{ DAEMON_FILE }}"
          register: daemon_config

        - name: get docker daemon config
          set_fact:
            docker_config: "{{ daemon_config.content | b64decode | from_json }}"

        - name: append log-driver and log-opts
          set_fact:
            docker_config: "{{ docker_config | default([]) | combine({ item.key : item.value }) }}"
          with_items:
            - { key: "log-driver", value: "json-file" }
            - { key: "log-opts", value: { "max-size": "10m", "max-file": "3"} }

        - name: write to docker daemon file
          copy:
            content: "{{ docker_config | to_nice_json }}"
            dest: "{{ DAEMON_FILE }}"
    
      when: config_stats.stat.exists
    
    - name: create the docker daemon file
      copy:
        dest: "{{ DAEMON_FILE }}"
        content: |
            {
                "log-driver": "json-file",
                "log-opts": {
                    "max-size": "10m",
                    "max-file": "3"
                }
            }
      when: not config_stats.stat.exists

    - name: restart docker daemon
      systemd:
        state: restarted
        daemon_reload: yes
        name: docker

Using Logrotate

---
- name: Set up docker log rotation
  gather_facts: no
  hosts: all
  become: yes

  tasks:
    - name: create the log rotate file
      copy:
        dest: /etc/logrotate.d/docker-container
        content: |
            /var/lib/docker/containers/*/*.log {
                rotate 7
                daily
                compress
                size=1M
                missingok
                delaycompress
                copytruncate
            }

    - name: testing docker logrotate
      command: logrotate -fv /etc/logrotate.d/docker-container
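
To apply these recipes, run them with ansible-playbook against your inventory (the file names below are hypothetical; adjust them to match your checkout of ansible-recipes):

# hypothetical file names; adjust to match your checkout
ansible-playbook -i inventory.ini docker-json-file-rotate.yml
ansible-playbook -i inventory.ini docker-logrotate.yml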

No more code!

Thank you for reading; please leave comments to improve this article. God bless 🙏🏿 you and your kernel 🧠 always!

If you want to learn more about Docker log drivers, read on: in the next piece, I discuss the Graylog Extended Log Format (GELF) using the ELK stack.
Understanding the ELK Stack with Practical Examples
