Lately I've noticed that my nginx server is throwing "upstream prematurely closed connection while reading upstream" when reverse proxying my Ktor webapps, and I'm not sure why.

The client (Ktor) fails with "Chunked stream has ended unexpectedly: no chunk size" whenever nginx logs that error.

Exception in thread "main" java.io.EOFException: Chunked stream has ended unexpectedly: no chunk size
    at io.ktor.http.cio.ChunkedTransferEncodingKt.decodeChunked(ChunkedTransferEncoding.kt:77)
    at io.ktor.http.cio.ChunkedTransferEncodingKt$decodeChunked$3.invokeSuspend(ChunkedTransferEncoding.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
    at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
    at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)

The error happens randomly, and it only seems to affect big (1 MB+) requests... so here's the rabbit hole I went down to track down the bug and figure out a solution.
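
For context, the setup is a bog-standard nginx reverse proxy in front of a Ktor app, something like the sketch below (the domain and upstream port are placeholders, not my exact config):

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        # The Ktor app listens locally; 8080 is a placeholder port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}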


If you have a dedicated server with OVHcloud, you can purchase additional IPs, also known as "Failover IPs", for your server. Because I have spare resources on my dedicated servers, I wanted to give (or rent out) VPSes to my friends for their own projects, but I wanted to give them the real VPS experience, with a real external public IP that they could connect to and use.

So I figured out how to bind an external IP to an LXD container or LXD VM! Although there are several online tutorials discussing this process, none of them worked for me until I stumbled upon this semi-unrelated OVHcloud guide that pointed me in the right direction.
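
To give you a taste of where this ends up: one way to hand a container its own public IP is LXD's routed NIC type. A minimal sketch, assuming the Additional IP is already routed to your server by OVHcloud (the container name, parent interface, and IP below are placeholders):

# Attach the Additional IP to the container via a routed NIC
lxc config device add friend-vps eth0 nic \
    nictype=routed \
    parent=eth0 \
    ipv4.address=203.0.113.10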


Even though I stopped using Proxmox on my dedicated servers, there were still some stateful containers that I needed to host that couldn't be hosted via Docker, such as the "VPSes" I give out to my friends so they can host their own stuff.

Enter LXD: a virtual machine and system container manager developed by Canonical. LXD is included in Ubuntu Server 20.04 and newer, and can easily be set up with the lxd init command. Just like Proxmox, LXD can manage LXC containers. Despite their similar names, LXD is not a "successor" to LXC; rather, it is a management tool for LXC containers. Even the LXD team knows the naming is confusing.
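
For reference, going from a fresh install to a running container looks roughly like this (the container name is made up):

# Interactive setup: storage pool, network bridge, and so on
lxd init

# Launch an Ubuntu 22.04 container and check that it's running
lxc launch ubuntu:22.04 my-container
lxc list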

Keep in mind that LXD does not provide a GUI like Proxmox. If you prefer managing your containers through a GUI, you may find LXD less appealing. But for me? I rarely used Proxmox's GUI anyway and always managed my containers via the terminal.

Peter Shaw has already written an excellent tutorial on this topic, and his tutorial rocks! But I wanted to write my own tutorial with my own findings and discoveries, such as how to fix network issues after migrating the container, something left out of his tutorial because "it is a little beyond the scope of this article, that’s a topic for another post."

The source server is running Proxmox 7.1-12, and the target server is running Ubuntu Server 22.04. The LXC container we plan to migrate is running Ubuntu 22.04.
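
Before we dive in, here's a rough sketch of the overall flow, following the same approach as Peter Shaw's tutorial (the container ID 100, file names, and image alias are placeholders):

# On the Proxmox host: stop the container and create a backup
pct stop 100
vzdump 100 --dumpdir /tmp --compress gzip

# On the LXD host: LXD needs a metadata tarball to import the rootfs as an image
cat > metadata.yaml <<EOF
architecture: x86_64
creation_date: $(date +%s)
EOF
tar -czf metadata.tar.gz metadata.yaml

# Import the backup as an image, then create a container from it
lxc image import metadata.tar.gz vzdump-lxc-100-*.tar.gz --alias migrated-ct
lxc init migrated-ct my-container
lxc start my-container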


Since I stopped using Proxmox on my dedicated servers, I found myself missing my VXLAN network, which allowed me to assign a static IP to my LXC containers/VMs. If I had a database hosted on one of my dedicated servers, an application on another dedicated server could access it without me having to expose the service to the whole world.

Initially, I tried using Tailscale on the host system and binding the service's ports to the host's Tailscale IP, but this method proved complicated and difficult to manage: I had to keep track of which ports were in use and by which service.

However, I discovered a better solution: running Tailscale within a Docker container and making my application container use the Tailscale container's network! This pattern is also known as a "sidecar container".
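
Here's a minimal docker-compose sketch of the idea, using PostgreSQL as the example workload (the auth key, hostname, and images are placeholders; this assumes the official tailscale/tailscale image):

services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: my-database  # becomes the machine name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-REPLACE_ME
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_USERSPACE=false  # use the tun device instead of userspace networking
    volumes:
      - tailscale-state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  database:
    image: postgres:15
    # Share the Tailscale container's network namespace: the database is
    # reachable over the tailnet without publishing any ports on the host
    network_mode: service:tailscale
    environment:
      - POSTGRES_PASSWORD=REPLACE_ME
    restart: unless-stopped

volumes:
  tailscale-state: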



If you are managing your Docker containers via systemd services, you may get annoyed that your container logs are also forwarded to syslog, which can churn through your entire disk space.

Here's how to disable syslog forwarding for a specific systemd service:

cd /etc/systemd
cp journald.conf journald@noisy.conf
nano journald@noisy.conf

In the journald@noisy.conf configuration, change the ForwardToSyslog setting under the [Journal] section:

[Journal]
ForwardToSyslog=off

Then, in your service's configuration file (/etc/systemd/system/service-here.service), add LogNamespace under the [Service] section:

[Service]
...
LogNamespace=noisy
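
After editing the unit, reload systemd and restart the service so it starts logging into the new namespace:

systemctl daemon-reload
systemctl restart service-here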

And that's it! Now, if you want to look at your application logs, you need to use journalctl --namespace noisy -xe -u YOUR_SERVICE_HERE. If you don't include the --namespace, journalctl will only show the service's startup/shutdown messages.

journalctl --namespace noisy -xe -u powercms

This tutorial is based on hraban's answer; however, their answer covers how to disable logging to journald entirely. Because I didn't find any other tutorial about how to disable syslog forwarding for specific services, I decided to write one myself.