AppSmith on EKS OutOfMemoryError

@pranav, just to let you know: I upgraded our old MongoDB replica set to 5.0, dumped the DB again, and on import we got the same connectivity issue.
I think appsmithctl has an issue: why does it prepend the pod name to the service hostname?
Instead of appsmith-mongodb.backoffice-app.svc.cluster.local
it uses appsmith-mongodb-1.appsmith-mongodb.backoffice-app.svc.cluster.local
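One quick way to narrow this down is to check which of the two names actually resolves from inside the Appsmith pod. A minimal sketch, assuming getent is available in the container image (nslookup works as well if it is installed):

kubectl exec -it appsmith-0 -n backoffice-app -- getent hosts appsmith-mongodb.backoffice-app.svc.cluster.local
kubectl exec -it appsmith-0 -n backoffice-app -- getent hosts appsmith-mongodb-1.appsmith-mongodb.backoffice-app.svc.cluster.local

Note that for a MongoDB StatefulSet the per-pod name (appsmith-mongodb-1.appsmith-mongodb.…) is how replica set members address each other, so whether it is wrong depends on the connection string appsmithctl is being given.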

Hi @VitalieSvet, it seems like there may be a mismatch between the MongoDB data source being used by Appsmith and the one you have configured. Can you provide more information about the specific issue you're facing? Are you encountering errors when deploying Appsmith, or are you having trouble connecting to the MongoDB data source after Appsmith has been deployed successfully? Please let me know and I'll do my best to assist you.

I suggest that you try running the following command to check for any issues:
kubectl exec -it appsmith-0 -n backoffice-app -- bash
Once you are inside the pod, please run the following command:
env | grep -i mongo
This should help us gather more information about the issue you are facing.
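Depending on the Appsmith version, the relevant variable is usually APPSMITH_MONGODB_URI or APPSMITH_DB_URL, so a narrower filter (a sketch; adjust the names if your release uses different ones) would be:

env | grep -iE 'APPSMITH_(MONGODB_URI|DB_URL)'

For a standalone MongoDB the URI normally points at the service name (e.g. mongodb://<user>:<password>@appsmith-mongodb.backoffice-app.svc.cluster.local:27017/<db>), whereas a replica set URI typically lists the per-pod hostnames plus a replicaSet= parameter, which would explain the appsmith-mongodb-1.… host showing up.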

Hello @felix-appsmith @Olawale ,

We migrated it to the Helm deployment as you suggested. It worked fine for 2-3 days and then crashed again with the allocation error message:

backend stdout | Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 898543932, limit: 899678208)
backend stdout |  at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
backend stdout |  at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
backend stdout |  at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
backend stdout |  at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)
backend stdout |  at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)
backend stdout |  at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)
backend stdout |  at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)
backend stdout |  at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)
backend stdout |  at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)
backend stdout |  at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)
backend stdout |  at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
backend stdout |  at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
backend stdout |  at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:140)
backend stdout |  at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
backend stdout |  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
backend stdout |  ... 8 common frames omitted

Could this be related to the queries/load we run?
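For context on the stack trace: the limit in the message (899,678,208 bytes, roughly 858 MiB) is the JVM's cap on direct (off-heap) buffer memory, which Netty uses for network I/O, so heavy query traffic can push allocations up to that cap even if the pod itself is not out of memory. Two hedged options, assuming the standard Appsmith Helm chart with a release named appsmith (mirroring the appsmith-0 pod) and that the chart exposes the usual resources values; check values.yaml for your chart version before applying:

helm upgrade appsmith appsmith/appsmith -n backoffice-app --reuse-values \
  --set resources.limits.memory=4Gi \
  --set resources.requests.memory=2Gi

and, if the backend image lets you pass extra JVM options (the exact mechanism depends on the image, so verify it first), raising the off-heap cap with -XX:MaxDirectMemorySize=1g or higher, since by default that cap is tied to the heap size.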

Hello Vitalie!
Please run the following commands and share the output with us:

kubectl get events -n <namespace>

and

kubectl describe pod <pod_name> -n <namespace>

This will give us more details about the pod state and what happened.
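In particular, the "Last State" section of the describe output shows whether the container was OOMKilled by Kubernetes (pod memory limit hit) or simply restarted after the Java-level error. A quick way to pull just that information, using the same placeholders as above:

kubectl describe pod <pod_name> -n <namespace> | grep -A 5 'Last State'
kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'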