
Memory leak with BYTEA column types with Exposed DSL #478

Closed

leeduan opened this issue Feb 14, 2020 · 6 comments

leeduan commented Feb 14, 2020

Using pgjdbc-ng 0.8.3 with org.jetbrains.exposed:exposed:0.17.7, querying rows that have a BYTEA column mapped through a custom Exposed ColumnType results in a memory leak. Using the org.postgresql:postgresql:42.2.5 driver does not produce the same issue, since it never seems to map the value from the db as a ByteBufInputStream.

Also, using a BlobColumnType that wraps the ByteArray in a javax.sql.rowset.serial.SerialBlob does not leak while using pgjdbc-ng; a sketch of that workaround follows below.
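Roughly, the Blob-based workaround looks like this. It is only a sketch, assuming the same Exposed 0.17.x ColumnType API as the leaking version further down; the class name is illustrative, not taken from our code.

import com.google.protobuf.Message
import org.jetbrains.exposed.sql.ColumnType
import java.sql.Blob
import javax.sql.rowset.serial.SerialBlob

// Sketch of the non-leaking workaround: hand the driver a java.sql.Blob instead of
// letting it map BYTEA to a ByteBufInputStream, and free the Blob after reading it.
class ProtoBlobColumnType<T : Message>(private val message: T) : ColumnType() {

    override fun sqlType() = "BYTEA"

    // Wrap the serialized proto in a SerialBlob so the value is written as a Blob.
    override fun notNullValueToDB(value: Any): Any =
        when {
            message::class.java.isInstance(value) ->
                SerialBlob(message::class.java.cast(value).toByteArray())
            else -> throw IllegalStateException("...")
        }

    // Read the Blob fully, then free it so the driver can release its buffers.
    override fun valueFromDB(value: Any): Any =
        when (value) {
            is Blob -> try {
                message.parserForType.parseFrom(value.getBytes(1, value.length().toInt()))
            } finally {
                value.free()
            }
            is ByteArray -> message.parserForType.parseFrom(value)
            is Message -> value
            else -> throw IllegalStateException("...")
        }

    override fun valueToDB(value: Any?): Any? = value?.let { notNullValueToDB(it) }
}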

Here's example code handling the BYTEA column type by conversion from/to a protobuf message:

import com.google.protobuf.Message
import io.netty.buffer.ByteBufInputStream
import org.jetbrains.exposed.sql.ColumnType

class ProtoByteAColumnType<T: Message>(private val message: T): ColumnType() {

    override fun sqlType() = "BYTEA"

    override fun notNullValueToDB(value: Any): Any =
        when {
            message::class.java.isInstance(value) -> message::class.java.cast(value).toByteArray()
            else -> throw IllegalStateException("...")
        }

    override fun valueFromDB(value: Any): Any =
        when (value) {
            is ByteArray -> message.parserForType.parseFrom(value)
            is Message -> value
            // pgjdbc-ng hands BYTEA back as a ByteBufInputStream; drain it fully and close it via use {}
            is ByteBufInputStream -> value.use { message.parserForType.parseFrom(it.readAllBytes()) }
            else -> throw IllegalStateException("...")
        }

    override fun valueToDB(value: Any?): Any? = value?.let { notNullValueToDB(value) }
}

Notice how we wrap the InputStream in use { } so it is guaranteed to be closed.
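For context, the column type gets registered on a table roughly like this (a hypothetical sketch; Struct only stands in for whatever protobuf message the column actually stores):

import com.google.protobuf.Struct
import org.jetbrains.exposed.sql.Table

// Hypothetical table definition showing how the column type above is wired up.
object Documents : Table("documents") {
    val id = uuid("id").primaryKey()
    val payload = registerColumn<Struct>("payload", ProtoByteAColumnType(Struct.getDefaultInstance()))
}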

Here's a ByteBuf leak log that seems to coincide with queries on the related table:

2020-02-12 21:38:46.613 | ERROR | i.n.u.ResourceLeakDetector               | PG-JDBC I/O (11)     | MDC:[] | - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
  io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:349)
  io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
  io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
  io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
  io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
  io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
  io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
  io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
  io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
  io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
  io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
  io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  java.base/java.lang.Thread.run(Unknown Source)

Any idea why this would be the case?

kdubb (Member) commented Feb 14, 2020

Can you run with -Dio.netty.leakDetectionLevel=advanced and -Dio.netty.leakDetection.targetRecords=10 to amp up the leak detection?
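(If passing JVM flags to the deployment is awkward, the same properties can also be set at the very top of main before any Netty classes are loaded; a minimal sketch, not a substitute for the -D flags above:)

fun main() {
    // Netty's ResourceLeakDetector reads these system properties when its class initializes,
    // so they must be set before the driver loads any Netty classes.
    System.setProperty("io.netty.leakDetectionLevel", "advanced")
    System.setProperty("io.netty.leakDetection.targetRecords", "10")
    // ... start the application and run the failing queries ...
}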

leeduan (Author) commented Feb 14, 2020

I'll do some more testing with ^^ early next week.

It could also just be using so much memory per query that it never levels off before it OOMs. Switching over to blob columns, memory climbs a lot more slowly, e.g. the app starts at 2.5 GB and climbs to 3.9 GB but then levels off.

I could get the pod to crash with a max memory allocation of 12 GB within 20-30 SQL SELECT statements by UUIDs in a list when using ProtoByteAColumnType.
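The queries in question are shaped roughly like this (a hypothetical sketch, reusing the Documents table definition above):

import org.jetbrains.exposed.sql.select
import org.jetbrains.exposed.sql.transactions.transaction
import java.util.UUID

// Select rows by a list of UUIDs and materialize each proto payload through ProtoByteAColumnType.
fun loadPayloads(ids: List<UUID>) = transaction {
    Documents.select { Documents.id inList ids }
        .map { it[Documents.payload] }
}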


craftmaster2190 commented

+1


kdubb (Member) commented Apr 10, 2020

Exposed has a leak problem... this isn't related to the driver, just that this driver requires the JDBC spec to be followed properly 😐

I've logged an issue in Exposed: JetBrains/Exposed/issues/871
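(For anyone hitting this: the JDBC contract being referred to is, as I understand it, that streams and Blobs handed out by the driver must be fully consumed and closed or freed by the caller; otherwise a driver that backs them with pooled buffers cannot release the memory. A plain-JDBC sketch of the expected pattern, with hypothetical table and column names:)

import java.sql.Connection
import java.util.UUID

// Fully drain and close the stream the driver hands back, so its pooled buffers can be released.
fun readPayload(connection: Connection, id: UUID): ByteArray? =
    connection.prepareStatement("SELECT payload FROM documents WHERE id = ?").use { stmt ->
        stmt.setObject(1, id)
        stmt.executeQuery().use { rs ->
            if (rs.next()) rs.getBinaryStream("payload").use { it.readAllBytes() } else null
        }
    }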
