Docker Swarm
If you want to deploy this stack on a Docker Swarm with multiple nodes, or run replicas of the frontend (clustering), there are several things to consider first.
Important: You can only deploy multiple replicas of the frontend services seahub and seahub-media. Deploying replicas of the backend or the database would cause data inconsistency or even data corruption.
In order to make the same volumes available to services running on different nodes, you need an advanced storage solution. This could be distributed storage like GlusterFS or Ceph, or network storage like NFS. The volumes are then usually mounted through storage plugins. The repository marcelo-ochoa/docker-volume-plugins contains some good storage plugins for Docker Swarm.
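For illustration, once such a plugin is installed (e.g. under the alias glusterfs), a named volume could be declared in the compose file roughly like this. This is a minimal sketch: the alias, the GlusterFS volume gv0 and the subdirectory mapping are assumptions based on common plugin usage, so check the plugin's README for the exact syntax it expects.

```yaml
volumes:
  seafile-data:
    # 'glusterfs' is the alias chosen at 'docker plugin install' time (assumption)
    driver: glusterfs
    # hypothetical mapping: GlusterFS volume 'gv0', subdirectory 'seafile-data'
    name: "gv0/seafile-data"
```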
If you have services running on different nodes that have to communicate with each other, you have to define their network as an overlay network. This spans the network across the whole Swarm.
```yaml
seafile-net:
  driver: overlay
  internal: true
```
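Services that need to communicate over this network then attach to it through their networks key, for example:

```yaml
services:
  seahub:
    networks:
      - seafile-net
```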
If you want to run frontend replicas (clustering), you have to enable the dnsrr endpoint mode, which is required for proper load balancing. Configure the following options:
Set the endpoint mode for the frontend services seahub and seahub-media to dnsrr. This enables seafile-caddy to see the IPs of all replicas, instead of the default virtual IP (VIP) created by the Swarm routing mesh.
```yaml
deploy:
  mode: replicated
  replicas: 2
  endpoint_mode: dnsrr
```
Then you have to set the following environment variable for seafile-caddy, which enables periodic DNS resolution for the frontend services:
```yaml
environment:
  - SWARM_DNS=true
```
The load balancer, in this case seafile-caddy, will then create so-called sticky sessions, meaning that a client connecting from a certain IP will be forwarded to the same service instance for the time being. Hashing is based on the X-Forwarded-For header. This is better than client-IP-based hashing when you have another reverse proxy in front of seafile-caddy, which is highly recommended: with client-IP-based hashing, seafile-caddy would forward everything to the same container, as it only sees the IP of the reverse proxy, whereas the X-Forwarded-For header contains the actual client IP.
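For illustration, such a front reverse proxy, here lucaslorentz/caddy-docker-proxy as used in the example below, could be wired up roughly like this. This is a sketch, not a complete configuration: the domain is hypothetical, the proxy network name caddy is arbitrary, and it assumes seafile-caddy listens on port 80.

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    environment:
      # tell caddy-docker-proxy on which network to reach the upstream services
      - CADDY_INGRESS_NETWORKS=caddy
    networks:
      - caddy
    volumes:
      # caddy-docker-proxy builds its configuration from service labels via the Docker API
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager

  seafile-caddy:
    networks:
      - caddy
      - seafile-net
    deploy:
      labels:
        # hypothetical domain
        caddy: seafile.example.com
        # assumes seafile-caddy listens on port 80
        caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    driver: overlay
```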
It is also recommended to use dnsrr mode on the seafile-server when you run multiple replicas of seahub. This allows seafile-server to see the actual IPs of the seahub replicas when they connect to it, instead of a single virtual IP for all of them, which avoids possible IP:PORT collisions in the TCP connections between seahub and seafile-server when you run many seahub replicas.
```yaml
deploy:
  endpoint_mode: dnsrr
```
You can check out this example and use it as a starting point for your Docker Swarm deployment. It uses lucaslorentz/caddy-docker-proxy as the external reverse proxy and the GlusterFS plugin from marcelo-ochoa/docker-volume-plugins, and resembles my personal production setup.
```sh
wget https://raw.githubusercontent.com/ggogel/seafile-containerized/master/compose/docker-compose-swarm.yml
```