Since I stopped using Proxmox on my dedicated servers, I found myself missing my VXLAN network, which allowed me to assign a static IP to each of my LXC containers/VMs. If I had a database hosted on one of my dedicated servers, an application on another dedicated server could access it without me having to expose the service to the whole world.
Initially, I tried using Tailscale on the host system and binding the services' ports to the host's Tailscale IP, but this method proved complicated and difficult to manage: I had to keep track of which ports were in use and by which service.
However, I discovered a better solution: running Tailscale within a Docker container and making my application container use the network of the Tailscale container! This pattern is also known as a "sidecar container".
docker-compose.yml
version: "3.9"
services:
  web:
    image: your-container-image-here
    network_mode: service:tailscale
    depends_on:
      tailscale:
        condition: service_healthy

  tailscale:
    hostname: your-container-hostname-here
    image: tailscale/tailscale:latest
    healthcheck:
      test: ["CMD-SHELL", "tailscale status"]
      interval: 1s
      timeout: 5s
      retries: 60
    volumes:
      - "./tailscale_var_lib:/var/lib"
      - "/dev/net/tun:/dev/net/tun"
    cap_add:
      - net_admin
      - sys_module
      - net_raw
    command: tailscaled
The hostname on the tailscale service will be shown in the Tailscale admin panel and will be used as the DNS name of the container.
The healthcheck on the tailscale service ensures that the Tailscale container is not considered ready until it is connected to the Tailscale network.
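If you want to see this check in action, you can run the same command by hand; a quick sketch, assuming the Compose service is named tailscale as in the example above:

# Run the same command the healthcheck uses; it fails until the node is connected.
docker compose exec tailscale tailscale status

# Or ask Docker what it currently thinks about the sidecar's health.
docker inspect --format '{{.State.Health.Status}}' "$(docker compose ps -q tailscale)"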
There are two binds on the tailscale service:
- ./tailscale_var_lib:/var/lib stores the Tailscale state, which is required to persist the Tailscale machine across container restarts.
- /dev/net/tun:/dev/net/tun allows the container to create network tunnels.

The cap_add section adds capabilities to the tailscale service: net_admin and sys_module are required, and while net_raw is not required, it is useful for tailscaled.
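As a quick sanity check that the state bind mount works, you can restart the sidecar and confirm it comes back as the same node without asking you to authenticate again; a sketch, assuming the Compose service is named tailscale:

# Restart only the Tailscale sidecar.
docker compose restart tailscale

# If the state in ./tailscale_var_lib survived, this shows the node connected
# again under the same name, with no new login required.
docker compose exec tailscale tailscale status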
Finally, the depends_on on the web service makes it wait until the tailscale service is healthy before starting. This avoids the web container starting up before the sidecar is connected to the Tailscale network, which can cause network hangs if your host is also using Tailscale and you initiate a connection to your Tailnet at the same time the sidecar container is connecting to it.
After running the stack with docker compose up, you will need to run docker exec IdOfTheTailscaleContainerHere tailscale up to connect the container to your Tailscale network and authorize it.
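For example, a sketch assuming the Compose service is named tailscale as in the examples above (using docker compose exec instead of looking up the container ID):

# Start the stack in the background.
docker compose up -d

# Authenticate the sidecar; tailscale up prints a login URL to open and approve.
docker compose exec tailscale tailscale up

# Or, if you use pre-generated Tailscale auth keys (the key below is a placeholder):
docker compose exec tailscale tailscale up --authkey=tskey-your-key-here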
Now you will be able to access your web service via its Tailscale IP and DNS name, sweet!
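For example, from another machine on your Tailnet (the Tailnet name and port below are placeholders; use whatever port your web service actually listens on):

# MagicDNS names follow the pattern <hostname>.<tailnet-name>.ts.net
curl http://your-container-hostname-here.tailnetnamehere.ts.net:8080/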
You don't need to specify any ports, unless you want users to access the service via the machine's external IP. However, if you do want to expose ports to the outside world, you need to expose them on the tailscale container, not on your web container, as in the example below.
version: "3.9"
services:
web:
image: your-container-image-here
network_mode: service:tailscale
depends_on:
tailscale:
condition: service_healthy
tailscale:
hostname: your-container-hostname-here
image: tailscale/tailscale:latest
ports:
- "80:80"
healthcheck:
test: ["CMD-SHELL", "tailscale status"]
interval: 1s
timeout: 5s
retries: 60
volumes:
- "./tailscale_var_lib:/var/lib"
- "/dev/net/tun:/dev/net/tun"
cap_add:
- net_admin
- sys_module
- net_raw
command: tailscaled
Let's suppose you have two dedicated servers: one runs nginx and the other runs your web app. You want the nginx container to reverse proxy your web application.
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:1.23.3
network_mode: service:tailscale
volumes:
- type: bind
source: nginx
target: /etc/nginx
depends_on:
tailscale:
condition: service_healthy
tailscale:
hostname: nginx
image: tailscale/tailscale:latest
ports:
- "80:80"
healthcheck:
test: ["CMD-SHELL", "tailscale status"]
interval: 1s
timeout: 5s
retries: 60
volumes:
- "./tailscale_var_lib:/var/lib"
- "/dev/net/tun:/dev/net/tun"
cap_add:
- net_admin
- sys_module
- net_raw
command: tailscaled
docker-compose.yml
version: "3.9"
services:
web:
image: ghcr.io/lorittabot/showtime-backend@sha256:4fb1c202962130964193c0c52a394b9038cb1aed1b00c7fcd232e5ba6ba95679
network_mode: service:tailscale
environment:
JAVA_TOOL_OPTIONS: "-verbose:gc -XX:+UnlockExperimentalVMOptions -Xmx2G -Xms2G -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+ExitOnOutOfMemoryError"
volumes:
- type: bind
source: showtime.conf
target: /showtime.conf
depends_on:
tailscale:
condition: service_healthy
tailscale:
hostname: loritta-showtime-production
image: tailscale/tailscale:latest
healthcheck:
test: ["CMD-SHELL", "tailscale status"]
interval: 1s
timeout: 5s
retries: 60
volumes:
- "./tailscale_var_lib:/var/lib"
- "/dev/net/tun:/dev/net/tun"
cap_add:
- net_admin
- sys_module
- net_raw
command: tailscaled
In your nginx site configuration, you can set up a reverse proxy that points at the web app's Tailscale DNS name.
Keep in mind that nginx will fail to load the configuration if you haven't set up the web app on your Tailnet yet, because it won't be able to resolve the hostname in proxy_pass!
server {
    listen 443 ssl;
    server_name loritta.website;

    # listen 443 ssl also needs ssl_certificate and ssl_certificate_key
    # directives; they are omitted here for brevity.

    location / {
        proxy_pass http://loritta-showtime-production.tailnetnamehere.ts.net:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
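If you change the nginx configuration later, you can validate and reload it without restarting the whole stack; a sketch, assuming the Compose service is named nginx as above:

# Test the configuration (an unresolvable Tailnet hostname will show up here too).
docker compose exec nginx nginx -t

# Reload nginx with the new configuration.
docker compose exec nginx nginx -s reload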
And that's it! Have fun untangling your network!
Of course, you could use Docker Swarm to enable communication between your services, but this is a way simpler solution that doesn't require making your services stateless just to use Docker Swarm.