I have been dabbling with Linux and, as a Kotlin developer, it surprised me how fast compiling Kotlin on Linux is compared to Windows. At the same time, I heard about the Windows 11 Dev Drive which, supposedly, is faster than NTFS for software development. Microsoft says that Dev Drive is faster because it uses ReFS and because Microsoft Defender's Antivirus defaults to "performance mode".
So I've decided to build a Kotlin project that uses Kotlin/JVM and Kotlin/JS using ./gradlew clean build:
And the results are...
I don't know how Dev Drive can be this slow, considering that Microsoft is targeting developers with this feature! My Dev Drive is even set up on its own partition on my main NVMe SSD, instead of using a virtual hard drive, to try to extract the "maximum performance", and even then the performance is worse than NTFS.
But all hope is not lost, because Microsoft's advertising about the Dev Drive may not be 100% truthful. You can extract more performance from the Dev Drive by adding the entire disk as an exclusion in Windows Defender.
Isn't that incredibly stupid? After all, wasn't THE POINT of the Dev Drive to not need to worry about anti-virus and things like that? Oh well, after figuring that out, I've also learned that you can disable Windows Defender's filter entirely on all Dev Drives by using fsutil devdrv enable /disallowAv, which is cleaner than adding the entire disk to Windows Defender.
After doing that, here are the new results:
Dev Drive (with disallowAv): 22s

Don't forget that you need to change the GRADLE_USER_HOME to the Dev Drive, and I think you can go even further beyond by installing IntelliJ IDEA on the Dev Drive too.
While WSL2 has somewhat similar performance to Linux, IntelliJ IDEA does not seem to play well with projects hosted on a WSL2 drive if the project uses Kotlin Multiplatform, complaining about Not a valid absolute path errors related to KMP dependencies (this could probably be fixed by changing the location of the .kotlin folder).
Lately I've noticed that my nginx server is throwing "upstream prematurely closed connection while reading upstream" when reverse proxying my Ktor webapps, and I'm not sure why.
The client (Ktor) fails with "Chunked stream has ended unexpectedly: no chunk size" when nginx throws that error.
Exception in thread "main" java.io.EOFException: Chunked stream has ended unexpectedly: no chunk size
at io.ktor.http.cio.ChunkedTransferEncodingKt.decodeChunked(ChunkedTransferEncoding.kt:77)
at io.ktor.http.cio.ChunkedTransferEncodingKt$decodeChunked$3.invokeSuspend(ChunkedTransferEncoding.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
The error happens randomly, and it only seems to affect big (1MB+) requests... And here's the rabbit hole that I went down to track the bug and figure out a solution.
Java's ScheduledExecutorService has a nifty scheduleAtFixedRate API, which allows you to schedule a Runnable to be executed in a recurring manner.
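As a quick refresher, here's what that looks like with the JDK API (the 100 ms period and the counter task are just for illustration):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicInteger

val executions = AtomicInteger()
val executor = Executors.newSingleThreadScheduledExecutor()

// Run immediately (initialDelay = 0), then every 100 ms, measured from the
// *start* of each run, not from its end
executor.scheduleAtFixedRate({ executions.incrementAndGet() }, 0, 100, TimeUnit.MILLISECONDS)

Thread.sleep(1_050)
executor.shutdown()
println("Task ran ${executions.get()} times")
```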
Mimicking scheduleAtFixedRate seems easy; after all, isn't it just a while (true) loop that invokes your task every once in a while?
GlobalScope.launch {
    while (true) {
        println("Hello World!! Loritta is so cute :3")
        delay(5_000)
    }
}
And it seems to work fine, it does print Hello World!! Loritta is so cute :3 every 5s! And you may be wondering: What's wrong with it?
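One well-known difference, at least: a delay-based loop gives you a fixed *delay* between iterations, not a fixed *rate*, because the next iteration only starts after (task time + delay). A small sketch of the drift, using Thread.sleep to stand in for both the task and the pause (timings are illustrative):

```kotlin
// Each iteration takes (task time + pause time), so with a 50 ms task and a
// 100 ms pause, iterations start ~150 ms apart instead of every 100 ms
val starts = mutableListOf<Long>()
repeat(3) {
    starts += System.nanoTime()
    Thread.sleep(50)  // simulated work
    Thread.sleep(100) // the "delay between iterations"
}
val gapMs = (starts[2] - starts[1]) / 1_000_000
println("Gap between iteration starts: ${gapMs}ms") // ~150 ms, not 100 ms
```

scheduleAtFixedRate, by contrast, measures the period from the start of each run, so a slow task doesn't push the whole schedule forward.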
Over at /r/brdev, /u/sock_templar shared a Correios endpoint that lets you track Correios packages.
curl https://proxyapp.correios.com.br/v1/sro-rastro/CodigoDeRastreioAqui
It shows the events for the package, similar to the events you get when tracking a package on the Correios website, and this endpoint is even used by several tracking libraries found around the internet.
But did you know that there is another Correios tracking endpoint, one that shows more information about the package being tracked and also lets you track multiple packages in a single request?
Trying to figure out how to list files from a Java resources folder is hard: there are tons of solutions on Stack Overflow, but most of them are weird hacks, and they only work when executing your app via your IDE or when executing it via the command line, not both.
Joop Eggen's answer is awesome; however, it only handles one of those two scenarios at a time.
So here's an example (Kotlin, but it should be easy to migrate it to Java) that allows you to have both: reading the resources content when running from an IDE or via the command line!
import java.nio.file.FileSystemNotFoundException
import java.nio.file.FileSystems
import java.nio.file.Files
import java.nio.file.Paths

val uri = MainApp::class.java.getResource("/locales/").toURI()
val dirPath = try {
    Paths.get(uri)
} catch (e: FileSystemNotFoundException) {
    // If this is thrown, it means that we are running the JAR directly
    // (example: not from an IDE), so the resources need to be read from the
    // JAR's own file system
    val env = mutableMapOf<String, String>()
    FileSystems.newFileSystem(uri, env).getPath("/locales/")
}

Files.list(dirPath).forEach {
    println(it.fileName)
    if (it.fileName.toString().endsWith("txt")) {
        println("Result:")
        println(Files.readString(it))
    }
}
StackOverflow Post: https://stackoverflow.com/a/67839914/7271796
I needed to reload my network interface on the machine where my PostgreSQL database is hosted and, after doing that, my thread got forever blocked on the getNotifications(...) call.
"Loritta PostgreSQL Notification Listener" #261 daemon prio=5 os_prio=0 cpu=48.89ms elapsed=62372.91s tid=0x00007f45f806a460 nid=0xf08b5 runnable [0x00007f45d6dfc000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.Net.poll(java.base@…/Native Method)
at sun.nio.ch.NioSocketImpl.park(java.base@…/NioSocketImpl.java:181)
at sun.nio.ch.NioSocketImpl.timedRead(java.base@…/NioSocketImpl.java:285)
at sun.nio.ch.NioSocketImpl.implRead(java.base@…/NioSocketImpl.java:309)
at sun.nio.ch.NioSocketImpl.read(java.base@…/NioSocketImpl.java:350)
at sun.nio.ch.NioSocketImpl$1.read(java.base@…/NioSocketImpl.java:803)
at java.net.Socket$SocketInputStream.read(java.base@…/Socket.java:966)
at sun.security.ssl.SSLSocketInputRecord.read(java.base@…/SSLSocketInputRecord.java:478)
at sun.security.ssl.SSLSocketInputRecord.readHeader(java.base@…/SSLSocketInputRecord.java:472)
at sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(java.base@…/SSLSocketInputRecord.java:70)
at sun.security.ssl.SSLSocketImpl.readApplicationRecord(java.base@…/SSLSocketImpl.java:1455)
at sun.security.ssl.SSLSocketImpl$AppInputStream.read(java.base@…/SSLSocketImpl.java:1059)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
at org.postgresql.core.PGStream.receiveChar(PGStream.java:453)
at org.postgresql.core.v3.QueryExecutorImpl.processNotifies(QueryExecutorImpl.java:789)
- locked <0x0000000621293410> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc.PgConnection.getNotifications(PgConnection.java:1107)
at net.perfectdreams.loritta.cinnamon.pudding.utils.PostgreSQLNotificationListener.run(PostgreSQLNotificationListener.kt:29)
at java.lang.Thread.run(java.base@…/Thread.java:833)
So I went out and tried figuring out if this could be reproduced, and found out that someone had already reported this issue, but it was closed due to "lack of feedback". Anyhow, here are my investigations and explanations for anyone else wondering why their PostgreSQL getNotifications call is not receiving new notifications, even though it is blocked on the getNotifications call!
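The underlying mechanics can be reproduced with nothing but the standard library: a blocking socket read has no way of noticing a silently dead peer unless a read timeout (or TCP keepalive) is configured. This sketch is illustrative and is not pgjdbc's actual code; for the real driver, pgjdbc exposes socketTimeout and tcpKeepAlive connection properties for the same purpose.

```kotlin
import java.net.ServerSocket
import java.net.Socket
import java.net.SocketTimeoutException

// A server that accepts the connection but never sends anything, mimicking a
// connection that silently died (for example, after a network interface reload)
val server = ServerSocket(0)
Thread { server.accept() }.start()

val socket = Socket("localhost", server.localPort)
socket.soTimeout = 500 // without this, the read() below blocks forever

val timedOut = try {
    socket.getInputStream().read()
    false
} catch (e: SocketTimeoutException) {
    true
}
println("Read timed out: $timedOut")
socket.close()
server.close()
```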
Exposed's exposed-java-time (and exposed-kotlin-datetime) module has an issue: in PostgreSQL it uses TIMESTAMP WITHOUT TIME ZONE and always saves the timestamp using the system's current time zone (ZoneId.systemDefault()).
So, if you are running your application in two different time zones, the inserted result of Instant.now() will be different, even if the code was executed at the same nanosecond!
I got bitten by this bug in Loritta, where the legacy application was using the America/Sao_Paulo time zone while the newer slash commands web server was using UTC. This caused inconsistencies in the database, where transactions inserted by the legacy application appeared to be older than transactions inserted by the web server!
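The core of the problem can be demonstrated with plain java.time: TIMESTAMP WITHOUT TIME ZONE effectively stores a local wall-clock value, and the same Instant maps to different wall-clock values in different zones (the zones below match the Loritta example; the instant itself is arbitrary):

```kotlin
import java.time.Instant
import java.time.LocalDateTime
import java.time.ZoneId

// One single instant in time...
val instant = Instant.parse("2023-01-01T12:00:00Z")

// ...becomes two different wall-clock values depending on the zone, and the
// wall-clock value is all that TIMESTAMP WITHOUT TIME ZONE keeps
val inSaoPaulo = LocalDateTime.ofInstant(instant, ZoneId.of("America/Sao_Paulo"))
val inUtc = LocalDateTime.ofInstant(instant, ZoneId.of("UTC"))

println(inSaoPaulo) // 2023-01-01T09:00
println(inUtc)      // 2023-01-01T12:00
```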
While you can work around the issue by using TimeZone.setDefault(TimeZone.getTimeZone("UTC")) in your code, this isn't a great solution. Besides, it is recommended to use TIMESTAMP WITH TIME ZONE, even if you are working with a java.time.Instant!
Thankfully Exposed is very extensible, allowing you to support features that it doesn't support out of the box! So I submitted my TIMESTAMP WITH TIME ZONE implementation to my ExposedPowerUtils repository, along with other nifty Exposed utilities and extensions.
object ActionLog : LongIdTable() {
    // Instead of using "timestamp", use "timestampWithTimeZone"
    // This returns an "Instant"
    val timestamp = timestampWithTimeZone("timestamp")
}
Check out the ExposedPowerUtils repository, include the postgres-java-time module in your project, and have fun! Or, if you prefer, just copy the JavaTimestampWithTimeZoneColumnType file to your project.
Keep in mind that Exposed won't change the column type automatically if the table is already created; you can change it yourself with ALTER TABLE sonhostransactionslog ALTER COLUMN timestamp TYPE TIMESTAMP WITH TIME ZONE;. In my experience, if your current TIMESTAMP WITHOUT TIME ZONE values are in UTC, you shouldn't have any issues when migrating to TIMESTAMP WITH TIME ZONE!