Preallocating LXD Virtual Machine memory

Published at 21/07/2024 15:50 • #linux #lxd

By default, LXD Virtual Machines do not preallocate their memory up to the configured limits.memory. Instead, they allocate memory as the guest needs it during its lifetime, but they do not give memory back to the host system after it is freed, ballooning devices notwithstanding.

So, since the memory isn't given back to the host anyway, why not preallocate the memory used by the VM? That way it is easier to reason about whether you have enough memory in your system, instead of trying to think "uuhh, I do have 10240 megabytes free, but one of my VMs has a 6144 megabyte limit and is currently using 2048 megabytes, so actually we only have 6144 megabytes free".

First, you need to shut down your virtual machine...

lxc stop test-memoryalloc

Now, if you haven't already, let's set the memory limit for the virtual machine...

lxc config set test-memoryalloc limits.memory 8GB

And then we add QEMU's -mem-prealloc parameter to the VM!

lxc config set test-memoryalloc raw.qemu "\-mem-prealloc"

That's it! If you check how much memory the QEMU process for the test-memoryalloc virtual machine is using, you'll see that it is using ~8GB.
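One way to check is to look at the resident memory of the VM's QEMU process. This is a rough sketch, assuming (as is usually the case) that LXD's QEMU command line contains the instance name:

# Print the resident set size (in KiB) of the QEMU process for this VM.
# The pgrep pattern assumes the instance name appears in the command line.
ps -o rss= -p "$(pgrep -f -d, 'qemu.*test-memoryalloc')"

With -mem-prealloc set, the reported value should sit close to the configured limit right after boot.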

And yes, you need to add the \ before the dash, because if you don't, LXD will complain with Error: unknown shorthand flag: 'm' in -mem-prealloc.

If you want to remove the -mem-prealloc parameter later, you can do so with lxc config unset.

lxc config unset test-memoryalloc raw.qemu

During a "let's improve SparklyPower to make it the best Survival server ever" hyperfixation phase, I've really wanted to update my server to 1.21. I waited eagerly for the shiny Paper experimental 1.21 builds, to update my server right when it is released. Who cares if it is "experimental", after all, what could possibly go wrong?

Well, that was a big mistake.

When the Paper team says that these are experimental builds that you shouldn't run in production, they mean it. The experimental Paper 1.21 builds do not have all of the optimization patches, nor are they stable. But I'm stupid and decided to ignore the experimental label.

Now I'm stuck with a server that could handle ~80 players at 20 TPS on 1.20.6, but on 1.21 it struggles to keep 20 TPS with only 50 players online. And that makes me sad, because it is hard to just shrug it off and carry on when you know that players are sad or angry because the server is lagging, especially since it was entirely my fault.

<Pantufa> Isn't the solution straightforward? Why not just downgrade the server to 1.20.6 and move on?

Here's the deal: everyone says that downgrading versions is NOT supported; some even say that it is IMPOSSIBLE to downgrade, and that's why you MUST back up your server before updating.


DaVinci Resolve does not support GIFs, which is a bit of a pain when you want to use GIFs in your videos.

Thankfully, you can convert GIFs to transparent mov files with ffmpeg! This way, the GIF's transparent background stays transparent in the mov file.

ffmpeg -i source.gif -pix_fmt yuva420p -vcodec qtrle target.mov

After converting the file, just import the target.mov video into your DaVinci Resolve project (drag and drop the file into the timeline).

qtrle is the Apple QuickTime RLE codec, and it supports transparency! However, the transparency only works in a mov file, not in an mp4 file.
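If you have a bunch of GIFs to convert, a small shell loop works too (a rough sketch; the file names are placeholders):

# Convert every GIF in the current directory to a QuickTime RLE mov,
# keeping the transparent background.
for f in *.gif; do
    ffmpeg -i "$f" -pix_fmt yuva420p -vcodec qtrle "${f%.gif}.mov"
done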


Earlier this year, Coachella uploaded tons of clips from artists' shows at this year's Coachella to their YouTube channel. One of the clips was Underworld performing Two Months Off and Dark Train. As an Underworld fan, I loved it!

Sadly, the original video is now private. Why did they do this? I don't know. You also can't reupload the video to YouTube, because it gets automatically blocked worldwide by Coachella.

Thankfully, the Wayback Machine came in clutch! They archived the video, so it hasn't been lost to time forever, yay!

The Wayback Machine's servers are a bit slow, so the video may take a while to load. Thanks to this thread for explaining how to access archived YouTube videos via the Wayback Machine!



Lately I've noticed that my nginx server is throwing "upstream prematurely closed connection while reading upstream" when reverse proxying my Ktor webapps, and I'm not sure why.

The client (Ktor) fails with "Chunked stream has ended unexpectedly: no chunk size" when nginx throws that error.

Exception in thread "main" java.io.EOFException: Chunked stream has ended unexpectedly: no chunk size
    at io.ktor.http.cio.ChunkedTransferEncodingKt.decodeChunked(ChunkedTransferEncoding.kt:77)
    at io.ktor.http.cio.ChunkedTransferEncodingKt$decodeChunked$3.invokeSuspend(ChunkedTransferEncoding.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
    at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
    at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)

The error happens randomly, and it only seems to affect big (1MB+) requests... And here's the rabbit hole that I went down to track the bug and figure out a solution.


If you have a dedicated server with OVHcloud, you can purchase additional IPs, also known as "Fallback IPs", for your server. Because I have enough spare resources on my dedicated servers, I wanted to give or rent VPSes to my friends for their own projects, but I wanted to give them the real VPS experience, with a real external public IP that they can connect to and use.

So I figured out how to bind an external IP to your LXD container/LXD VM! Although there are several online tutorials discussing this process, none of them worked for me until I stumbled upon this semi-unrelated OVHcloud guide that pointed me in the right direction.


Even though I stopped using Proxmox on my dedicated servers, there were still some stateful containers I needed to host that couldn't be hosted via Docker, such as the "VPSes" that I give out to my friends so they can host their own stuff.

Enter LXD: a virtual machine and system container manager developed by Canonical. LXD is included in Ubuntu Server 20.04 and newer versions, and can be easily set up with the lxd init command. Just like Proxmox, LXD can manage LXC containers. Despite their similar names, LXD is not a "successor" to LXC; rather, it is a management tool for LXC containers. They do know that this is very confusing.
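If LXD isn't set up yet on the target machine, the initial setup looks roughly like this (a sketch; on Ubuntu Server the snap is usually preinstalled, and --auto accepts all the default answers instead of running the interactive wizard):

# Install LXD (skip if the snap is already there) and initialize it with defaults.
sudo snap install lxd
sudo lxd init --auto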

Keep in mind that LXD does not provide a GUI like Proxmox. If you prefer managing your containers through a GUI, you may find LXD less appealing. But for me? I rarely used Proxmox's GUI anyway and always managed my containers via the terminal.

Peter Shaw has already written an excellent tutorial on this topic, and his tutorial rocks! But I wanted to write my own tutorial with my own findings and discoveries, such as how to fix network issues after migrating the container, since that was left out from his tutorial because "it is a little beyond the scope of this article, that’s a topic for another post."

The source server is running Proxmox 7.1-12, and the target server is running Ubuntu Server 22.04. The LXC container we plan to migrate is running Ubuntu 22.04.


Since I stopped using Proxmox on my dedicated servers, I found myself missing my VXLAN network, which allowed me to assign static IPs to my LXC containers/VMs. If I had a database hosted on one of my dedicated servers, an application on another dedicated server could access it without me having to expose the service to the whole world.

Initially, I tried using Tailscale on the host system and binding the service's ports to the host's Tailscale IP, but this method proved to be complicated and difficult to manage. I had to keep track of which ports were being used and for what service.

However, I discovered a better solution: running Tailscale within a Docker container and making my application container use the Tailscale container's network! This pattern is also called a "sidecar container".
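As a rough sketch of the idea (the container names, the nginx placeholder image, and the auth key below are illustrative, not the exact setup from the full post):

# Tailscale sidecar: needs NET_ADMIN and /dev/net/tun, plus a volume for its state.
docker run -d --name ts-sidecar \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e TS_AUTHKEY=tskey-auth-REPLACE-ME \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -v ts-state:/var/lib/tailscale \
  tailscale/tailscale:latest

# The application container shares the sidecar's network stack, so its ports
# become reachable through the sidecar's Tailscale IP.
docker run -d --name my-app --network container:ts-sidecar nginx:alpine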



If you are managing your Docker containers via systemd services, you may get annoyed that your container logs are also written to syslog, which can churn through your entire disk space.

Here's how to disable syslog forwarding for a specific systemd service:

cd /etc/systemd
cp journald.conf journald@noisy.conf
nano journald@noisy.conf

In the journald@noisy.conf configuration, set

ForwardToSyslog=off

Then, in your service's configuration file (/etc/systemd/system/service-here.service)...

[Service]
...
LogNamespace=noisy
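
After editing the unit file, reload systemd and restart the service so it starts logging into the new namespace (the service name here is a placeholder):

systemctl daemon-reload
systemctl restart service-here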

And that's it! Now, if you want to look at your application's logs, you need to use journalctl --namespace noisy -xe -u YOUR_SERVICE_HERE. If you don't include --namespace, journalctl will only show the service's startup/shutdown information.

journalctl --namespace noisy -xe -u powercms

This tutorial is based on hraban's answer; however, their answer explains how to disable logging to journald entirely, and because I didn't find any other tutorial about how to disable syslog forwarding for specific services, I decided to write one myself.


Proxmox isn't for you

Published at 20/11/2022 13:00 • #proxmox

...if you aren't a data center selling VPSes to your clients, or if your workload isn't tailored for VMs.

Everyone uses Docker nowadays, but Proxmox doesn't natively support Docker. You do, however, have three different ways to run Docker with it:

  • Installing a Linux OS in a VM and running Docker in it. This is the solution recommended by Proxmox.
  • Running Docker in an LXC container. This is not recommended, but it does work. However, if you are using ZFS, you need to install fuse-overlayfs in the container (a rough sketch follows this list), and some people have reported that this setup can cause the Proxmox host to lock up due to deadlocks.
  • Running Docker on the Proxmox host itself. This is super not recommended, since you should keep the hypervisor layer "clean".
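
For the LXC route on ZFS, the workaround looks roughly like this inside the container (a sketch assuming a Debian/Ubuntu guest; the daemon.json step is an assumption and may not be needed on every Docker version):

# Inside the LXC container (Debian/Ubuntu assumed)
apt install -y fuse-overlayfs

# If Docker doesn't pick it up on its own, set the storage driver explicitly.
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "fuse-overlayfs" }
EOF
systemctl restart docker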

I always thought that this was a super wtf move, "why wouldn't they support Docker? Everyone uses it nowadays!"

And recently, after using Proxmox since 2017... I understood why.