If your PostgreSQL instance is randomly going into recovery mode... Have you checked if your disk is full?
2022-04-26 13:22:24.962 UTC [382] LOG: database system was not properly shut down; automatic recovery in progress
In the past, if my PostgreSQL LXC container's disk was full, PostgreSQL would throw I/O errors when trying to write new data to it. So when my instance started randomly going into recovery, I was stumped: what was causing this?
Until, by accident, I noticed that my disk was dangerously close to full, so I decided to check if the disk being full could be the culprit behind the random recoveries, and, fortunately, it was!
My theory is that this is due to a change made in newer PostgreSQL versions: if the disk is full, fsync fails, which causes PostgreSQL to consider the data unreliable and start restoring the database state from the WAL files. Previous PostgreSQL versions handled fsync failures incorrectly (they retried the fsync and assumed the data had been safely persisted, even though the kernel may have already thrown it away), so the behavior was changed: if fsync fails, PostgreSQL now automatically shuts down and initiates recovery mode.
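If you want to check how your own instance is configured to handle a failed fsync, newer PostgreSQL releases expose a data_sync_retry setting: with the default of off, a failed fsync raises a PANIC and forces a crash-and-restart cycle instead of a retry. A minimal sketch to check it over JDBC (the connection details are assumptions):

import java.sql.DriverManager

fun main() {
    // Hypothetical connection details; adjust to your own setup.
    DriverManager.getConnection("jdbc:postgresql://localhost:5432/postgres", "postgres", "hunter2").use { conn ->
        conn.createStatement().use { stmt ->
            stmt.executeQuery("SHOW data_sync_retry").use { rs ->
                rs.next()
                // "off" (the default) means a failed fsync raises a PANIC and triggers recovery instead of retrying.
                println("data_sync_retry = ${rs.getString(1)}")
            }
        }
    }
}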
Or maybe it is because I migrated my LXC container from an EXT4 partition to ZFS, who knows!
But if your free disk space is suspiciously low after a recovery and you want to be sure that this is what triggered it: create a large file that takes up your remaining free disk space, wait until PostgreSQL tries doing any I/O, and check if PostgreSQL crashes and starts automatic recovery. If it does, then the disk being full is probably what triggered recovery mode!
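If you'd rather script that disk-filling step, here's a minimal sketch (the scratch-file path and the safety margin are assumptions; point it at the same filesystem your PostgreSQL data directory lives on, and delete the file once you're done testing):

import java.io.File
import java.io.IOException

fun main() {
    // Hypothetical path: pick any location on the same filesystem as your PostgreSQL data directory.
    val scratch = File("/var/lib/postgresql/fill-me-up.bin")
    val safetyMargin = 16L * 1024 * 1024 // leave ~16 MiB so you can still delete the file afterwards
    val block = ByteArray(1024 * 1024)   // 1 MiB of zeros per write

    try {
        scratch.outputStream().use { out ->
            // Keep appending zero blocks until the partition is (almost) out of free space.
            while (scratch.usableSpace > safetyMargin) {
                out.write(block)
            }
        }
    } catch (e: IOException) {
        println("Write failed (probably ENOSPC), so the disk is already full: ${e.message}")
    }

    println("Free space left: ${scratch.usableSpace} bytes. Now wait for PostgreSQL's next write/fsync.")
    // When you're done, delete the scratch file to give the space back: scratch.delete()
}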
However, you may be wondering...
If the issue was that there wasn't any free space, how was PostgreSQL able to successfully recover and start up again?
When PostgreSQL finishes its recovery process, a bunch of free space ends up being cleared. This is probably because PostgreSQL reads the data from the WAL files, applies it to the tables' data, and then deletes the WAL files it no longer needs, which frees disk space and lets PostgreSQL keep working until the disk fills up again.
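And if you want to check how much of your disk the WAL files are currently eating, here's a small sketch (again, the connection details are assumptions; pg_ls_waldir() needs PostgreSQL 10+ and superuser or pg_monitor rights):

import java.sql.DriverManager

fun main() {
    // Hypothetical connection details; adjust to your own setup.
    DriverManager.getConnection("jdbc:postgresql://localhost:5432/postgres", "postgres", "hunter2").use { conn ->
        conn.createStatement().use { stmt ->
            stmt.executeQuery("SELECT count(*), pg_size_pretty(coalesce(sum(size), 0)) FROM pg_ls_waldir()").use { rs ->
                rs.next()
                println("WAL segments: ${rs.getInt(1)}, total size: ${rs.getString(2)}")
            }
        }
    }
}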
Anyhow, if your PostgreSQL is randomly going into recovery for no apparent reason, don't forget to check your available free disk space!
I needed to reload my network interface on the machine where my PostgreSQL database is hosted and, after doing that, my thread got forever blocked on the getNotifications(...) call.
"Loritta PostgreSQL Notification Listener" #261 daemon prio=5 os_prio=0 cpu=48.89ms elapsed=62372.91s tid=0x00007f45f806a460 nid=0xf08b5 runnable [0x00007f45d6dfc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.Net.poll([email protected]/Native Method)
at sun.nio.ch.NioSocketImpl.park([email protected]/NioSocketImpl.java:181)
at sun.nio.ch.NioSocketImpl.timedRead([email protected]/NioSocketImpl.java:285)
at sun.nio.ch.NioSocketImpl.implRead([email protected]/NioSocketImpl.java:309)
at sun.nio.ch.NioSocketImpl.read([email protected]/NioSocketImpl.java:350)
at sun.nio.ch.NioSocketImpl$1.read([email protected]/NioSocketImpl.java:803)
at java.net.Socket$SocketInputStream.read([email protected]/Socket.java:966)
at sun.security.ssl.SSLSocketInputRecord.read([email protected]/SSLSocketInputRecord.java:478)
at sun.security.ssl.SSLSocketInputRecord.readHeader([email protected]/SSLSocketInputRecord.java:472)
at sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket([email protected]/SSLSocketInputRecord.java:70)
at sun.security.ssl.SSLSocketImpl.readApplicationRecord([email protected]/SSLSocketImpl.java:1455)
at sun.security.ssl.SSLSocketImpl$AppInputStream.read([email protected]/SSLSocketImpl.java:1059)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
at org.postgresql.core.PGStream.receiveChar(PGStream.java:453)
at org.postgresql.core.v3.QueryExecutorImpl.processNotifies(QueryExecutorImpl.java:789)
- locked <0x0000000621293410> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc.PgConnection.getNotifications(PgConnection.java:1107)
at net.perfectdreams.loritta.cinnamon.pudding.utils.PostgreSQLNotificationListener.run(PostgreSQLNotificationListener.kt:29)
at java.lang.Thread.run([email protected]/Thread.java:833)
So I went out and tried figuring out if this could be reproduced, and found out that someone had already reported this issue, but it was closed due to "lack of feedback". Anyhow, here are my investigations and explanations for anyone else wondering why their PostgreSQL getNotifications call is not receiving new notifications, even though it is blocked on the getNotifications call!
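For anyone who hasn't used LISTEN/NOTIFY from the JVM before, here is a minimal sketch of what such a listener loop with the pgjdbc driver usually looks like (the connection details, channel name and timeout are assumptions, not the real code from my listener); the block shown in the thread dump above happens inside the getNotifications call:

import java.sql.DriverManager
import org.postgresql.PGConnection

fun main() {
    // Hypothetical connection details and channel name; adjust to your own setup.
    val connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/mydb", "myuser", "hunter2")
    connection.createStatement().use { it.execute("LISTEN my_channel") }

    val pgConnection = connection.unwrap(PGConnection::class.java)

    while (true) {
        // Blocks for up to 10 seconds, or until the backend pushes a notification.
        // In my case, this is the call that stayed blocked forever after the network interface reload.
        val notifications = pgConnection.getNotifications(10_000)
        if (notifications != null) {
            for (notification in notifications) {
                println("Received notification on ${notification.name}: ${notification.parameter}")
            }
        }
    }
}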