Lately I've noticed that my nginx server is throwing "upstream prematurely closed connection while reading upstream" when reverse proxying my Ktor webapps, and I'm not sure why.
The client (Ktor) fails with "Chunked stream has ended unexpectedly: no chunk size" when nginx throws that error:
Exception in thread "main" java.io.EOFException: Chunked stream has ended unexpectedly: no chunk size
at io.ktor.http.cio.ChunkedTransferEncodingKt.decodeChunked(ChunkedTransferEncoding.kt:77)
at io.ktor.http.cio.ChunkedTransferEncodingKt$decodeChunked$3.invokeSuspend(ChunkedTransferEncoding.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
The error happens randomly, and it only seems to affect big (1MB+) requests... Here's the rabbit hole I went down to track down the bug and figure out a solution.
If you have a dedicated server with OVHcloud, you can purchase additional IPs, also known as "Failover IPs", for your server. Because I have enough resources on my dedicated servers, I wanted to give or rent VPSes to my friends for their own projects, but I wanted to give them the real VPS experience, with a real external public IP that they can connect to and use.
So I figured out how to bind an external IP to an LXD container/VM! Although there are several online tutorials discussing this process, none of them worked for me until I stumbled upon this semi-unrelated OVHcloud guide that pointed me in the right direction.
Even though I stopped using Proxmox on my dedicated servers, I still had some stateful workloads that couldn't be hosted via Docker, such as the "VPSes" that I give out to my friends so they can host their own stuff.
Enter LXD: a virtual machine and system container manager developed by Canonical. LXD is included in Ubuntu Server 20.04 and newer, and can be easily set up with the lxd init command. Just like Proxmox, LXD can manage LXC containers. Despite the similar names, LXD is not a "successor" to LXC; rather, it is a management tool for LXC containers. They do know that this is very confusing.
Keep in mind that LXD does not provide a GUI like Proxmox. If you prefer managing your containers through a GUI, you may find LXD less appealing. But for me? I rarely used Proxmox's GUI anyway and always managed my containers via the terminal.
Peter Shaw has already written an excellent tutorial on this topic, and his tutorial rocks! But I wanted to write my own tutorial with my own findings and discoveries, such as how to fix network issues after migrating the container, since that was left out from his tutorial because "it is a little beyond the scope of this article, that’s a topic for another post."
The source server is running Proxmox 7.1-12, while the target server is running Ubuntu Server 22.04. The LXC container we plan to migrate is running Ubuntu 22.04.
Since I stopped using Proxmox on my dedicated servers, I found myself missing my VXLAN network, which allowed me to assign a static IP to my LXC containers/VMs. If I had a database hosted on one of my dedicated servers, an application on another dedicated server could access it without me having to expose the service to the whole world.
Initially, I tried using Tailscale on the host system and binding the service's ports to the host's Tailscale IP, but this method proved to be complicated and difficult to manage. I had to keep track of which ports were being used and for what service.
However, I discovered a better solution: running Tailscale within a Docker container and making my application's container use the Tailscale container's network! This pattern is known as a "sidecar container".
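As a minimal sketch of the pattern (the container names, volume name, and database image here are my own assumptions, not the exact setup from this post; in practice the Tailscale container also needs an auth key, such as the TS_AUTHKEY environment variable used by the official tailscale/tailscale image):

```shell
# Run Tailscale in its own container; this container owns the network namespace.
docker run -d --name tailscale \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  -v tailscale-state:/var/lib/tailscale \
  -e TS_AUTHKEY=tskey-auth-REPLACE-ME \
  tailscale/tailscale

# Attach the application container to the Tailscale container's network:
# any port the app listens on is now reachable through the Tailscale IP,
# and no ports need to be bound on the host at all.
docker run -d --name my-database \
  --network container:tailscale \
  postgres:16
```

The nice part of `--network container:tailscale` is that there is no port bookkeeping at all: the app's ports simply exist on the sidecar's Tailscale IP.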
If you manage your Docker containers via systemd services, you may get annoyed that your container logs are forwarded to syslog too, which can churn through your entire disk space.
Here's how to disable syslog forwarding for a specific systemd service:
cd /etc/systemd
cp journald.conf journald@noisy.conf
nano journald@noisy.conf
In the journald@noisy.conf configuration, set:
ForwardToSyslog=off
Then, in your service's configuration file (/etc/systemd/system/service-here.service)...
[Service]
...
LogNamespace=noisy
And that's it! Now, if you want to look at your application logs, you need to use journalctl --namespace noisy -xe -u YOUR_SERVICE_HERE; if you don't include the --namespace flag, it will only show the service's startup/shutdown information.
journalctl --namespace noisy -xe -u powercms
This tutorial is based on hraban's answer; however, their answer covers how to disable logging to journald entirely, and because I didn't find any other tutorial about how to disable syslog forwarding for specific services, I decided to write one myself.
...if you aren't a data center selling VPSes for your clients, or if your workload isn't tailored for VMs.
Everyone uses Docker nowadays, but Proxmox doesn't natively support Docker. You do, however, have three different solutions for running Docker in it.
fuse-overlayfs in the container, and some people have reported that this solution can cause the Proxmox host to lock up due to deadlocks.
I always thought that this was a super wtf move: "why wouldn't they support Docker? Everyone uses it nowadays!"
And recently, after using Proxmox since 2017... I understood why.
When your Java app crashes due to an OutOfMemoryError, you probably want to automatically restart the application. After all, what's the point of keeping the application running after an OutOfMemoryError has been thrown? In that case you probably have a memory leak, so trying to catch the error and continue executing the app is unviable.
To do this, you can add -XX:+ExitOnOutOfMemoryError to your startup flags. This will cause the JVM to automatically exit when an OutOfMemoryError is thrown. Sweet!
However, you may also want to create a heap dump when an OutOfMemoryError is thrown, so you can analyze it later to figure out what triggered the error. In that case, use -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError; this will create a heap dump on OutOfMemoryError and then exit.
You can change the heap dump folder with -XX:HeapDumpPath. So, if you want to store your heap dumps in the /tmp/dump folder, use -XX:HeapDumpPath=/tmp/dump!
Here's how you would use it when starting your application.
java -Xmx1G -Xms1G -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump -jar ...
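If you want to verify that the flags actually work, you need a way to trigger an OutOfMemoryError on demand. Here's a minimal sketch (triggerOom is a hypothetical helper of mine, not something from this post); it provokes the error with oversized allocations rather than a real leak, so the resulting heap dump will look different from one produced by a genuine memory leak:

```kotlin
// Hypothetical helper that provokes an OutOfMemoryError on demand.
// Each LongArray below needs 8 bytes per element, so a few of them are
// guaranteed to exceed any realistic heap size.
fun triggerOom(): String {
    val elements = minOf(Runtime.getRuntime().maxMemory(), Int.MAX_VALUE.toLong()).toInt()
    return try {
        val hog = ArrayList<LongArray>()
        repeat(32) { hog.add(LongArray(elements)) }
        "no OutOfMemoryError thrown?"
    } catch (e: OutOfMemoryError) {
        // Caught here only so this demo exits cleanly; to actually test
        // -XX:+ExitOnOutOfMemoryError, let the error propagate uncaught.
        "OutOfMemoryError: ${e.message}"
    }
}

fun main() {
    println(triggerOom())
}
```

Run it with the flags from above (and a small heap, e.g. -Xmx64m, so it fails fast) and the JVM should write a dump into /tmp/dump and exit.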
If you are thirsty, drinking seawater will only leave you thirstier.
You can try drinking more seawater, try getting seawater from other sources, try refining the seawater...
And it may even relieve your thirst for a few moments, but it will never quench it.
You long to quench your thirst, wishing that this feeling will someday pass, and that, by continuing to drink seawater, your thirst will magically go away...
But all your attempts are in vain, because seawater will never satisfy you...
Seawater is no substitute for what you truly long for, for what you truly desire. But stuck in your comfort zone, you end up not seeing the reason why you keep drinking seawater instead of the water that would truly quench your thirst.
You want to break out of this infinite loop, to burst free, reach your dreams, and turn the imaginary into reality.
To go against those who say that drinking seawater is normal, people blinded because they don't want to leave their comfort zone, but who, deep down, have the same problems as you.
But for that, you need the courage to take the first step...
Java's ScheduledExecutorService has a nifty scheduleAtFixedRate API, which allows you to schedule a Runnable to be executed in a recurring manner.
Mimicking scheduleAtFixedRate seems easy; after all, isn't it just a while (true) loop that invokes your task every once in a while?
GlobalScope.launch {
    while (true) {
        println("Hello World!! Loritta is so cute :3")
        delay(5_000)
    }
}
And it seems to work fine: it does print Hello World!! Loritta is so cute :3 every 5 seconds! And you may be wondering: what's wrong with it?
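For comparison, here's what the real API behaves like. This is a minimal sketch with the period shortened for demonstration, and runAtFixedRate is my own wrapper name, not a standard API:

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Runs `task` every `periodMs` milliseconds, `times` times in total.
// Unlike the "delay after the task finishes" loop above, scheduleAtFixedRate
// measures the period from the *start* of each run, so a slow task does not
// push back the start of the next one.
fun runAtFixedRate(periodMs: Long, times: Int, task: () -> Unit) {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    val remaining = CountDownLatch(times)
    val future = scheduler.scheduleAtFixedRate({
        task()
        remaining.countDown()
    }, 0L, periodMs, TimeUnit.MILLISECONDS)
    remaining.await() // block until the task has run `times` times
    future.cancel(false)
    scheduler.shutdown()
}

fun main() {
    runAtFixedRate(periodMs = 100, times = 3) {
        println("Hello World!! Loritta is so cute :3")
    }
}
```

One classic gotcha of the real scheduleAtFixedRate worth keeping in mind: if the task throws an exception, all subsequent runs are silently cancelled.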
Recently, I've noticed that my Minecraft server's MOTD was suspiciously switching to my subservers' MOTD if I connected to my subserver and then quit. This was more noticeable if there was a network issue that caused the client to disconnect, due to Minecraft not automatically refreshing the server list.
But how? This is impossible! The Status Response Packet can only be sent while the connection is in the Status state, not in the Play state!
Or perhaps it ain't impossible, thanks to the Server Data Packet's introduction in Minecraft 1.19.
Why does it exist? While I'm not sure, I think it is related to Minecraft's new reporting system, because it is used to show the infamous "Chat messages can't be verified" toast.
Since BungeeCord dd3f820, the packet is now parsed and blocked on BungeeCord's side, so this is already fixed in newer BungeeCord/Waterfall versions, yay!