
I have this working, but I was wondering if there are any potential side effects, or even a better way to do this. The example below is generic.

I have a docker-compose file with two containers (container_1 and container_2).

container_1 exposes a volume that contains various config files that it uses to run the installed service.

container_2 mounts the volume from container_1 and periodically runs a script that pulls files and updates the config of the service running in container_1.

Every time the configs are updated I want to restart the service in container_1 without having to use cron or some of the other methods I have seen discussed.

My solution:

I put a script on container_1 that checks whether the config file has been updated. The file is initially empty, and the md5sum of that empty file is stored in a separate file. If the file has changed, based on its md5sum, the script updates the stored hash and kills the service process.

In the compose file I have a healthcheck that runs the script periodically, and restart is set to always. When the script in container_2 runs and updates the config files in container_1, monitor_config.sh on container_1 kills the service process, the container is restarted, and the configs are reloaded.

monitor_config.sh

#!/bin/sh
# current_file_hash initially contains the md5sum of an empty file

echo "Checking if config has updated"
config_hash=$(md5sum /path/to/config_file)
current_hash=$(cat /path/to/current_file_hash)

if [ "$config_hash" != "$current_hash" ]
then
    echo "config has been updated, restarting service"
    # store the new hash, then kill the service so restart: always brings the container back up
    md5sum /path/to/config_file > /path/to/current_file_hash
    kill $(pgrep service)
else
    echo "config unchanged"
fi

docker-compose.yml

version: '3.2'
services:
  service_1:
    build:
      context: /path/to/Dockerfile1
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/monitor_config.sh"]
      interval: 1m30s
      timeout: 10s
      retries: 1
    restart: always
    volumes:
      - type: volume
        source: conf_volume
        target: /etc/dir_from_1

  service_2:
    build:
      context: /path/to/Dockerfile2
    depends_on:
      - service_1
    volumes:
      - type: volume
        source: conf_volume
        target: /etc/dir_from_1

volumes:
  conf_volume:

I know this is not the intended use of healthcheck but it seemed like the cleanest way to get the desired effect while still maintaining only one running process in each container.

I have tried with and without tini in container_1 and it seems to work as expected in both cases.

I plan on extending the interval of the healthcheck to 24 hours as the script in container_2 only runs once a day.

Use case

I'm running Suricata in container_1 and pulledpork in container_2 to update the rules for Suricata. I want to run pulledpork once a day and, if the rules have been updated, restart Suricata to load the new rules.
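For reference, a minimal sketch of what the daily loop in container_2 could look like, assuming pulledpork.pl is on the PATH and its config (the /etc/pulledpork/pulledpork.conf path here is a placeholder) writes the rules into the shared volume:

#!/bin/sh
# Hypothetical entrypoint for container_2: update rules once a day.
# The config path is a placeholder; its output paths should point into
# the shared volume (/etc/dir_from_1) so container_1 sees the changes.
while true
do
    pulledpork.pl -c /etc/pulledpork/pulledpork.conf
    sleep 86400   # 24 hours between rule updates
done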


1 Answer


You may want to look at how tools like confd work; it would run as your container_1 entrypoint. It runs in the foreground, polls an external configuration source, and on a change rewrites the config files inside the container and restarts the spawned application.

To build your own tool like confd, you would include your restart trigger, perhaps your health monitoring script, and then pass stdin/stdout/stderr through, along with any signals, so that your restart tool becomes transparent inside the container.
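As a rough illustration (not confd itself, just a minimal sketch of the same idea), an entrypoint for container_1 could look like the following; the service command and paths are placeholders for your actual binary and config:

#!/bin/sh
# Hypothetical supervising entrypoint: start the service, poll the config,
# and restart only the child process when the file changes.
CONFIG=/etc/dir_from_1/config_file
LAST_HASH=$(md5sum "$CONFIG")

service &
PID=$!

# Forward termination signals to the child so `docker stop` shuts it down cleanly
trap 'kill -TERM "$PID"; wait "$PID"; exit 0' TERM INT

while true
do
    sleep 60
    NEW_HASH=$(md5sum "$CONFIG")
    if [ "$NEW_HASH" != "$LAST_HASH" ]
    then
        echo "config changed, restarting service"
        kill -TERM "$PID"
        wait "$PID"
        service &
        PID=$!
        LAST_HASH=$NEW_HASH
    fi
done

With something like this the container itself never restarts; only the service process inside it does, so you no longer need to lean on the healthcheck/restart combination.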

answered 2017-10-10T20:35:14.537