AppSmith on EKS: OutOfMemoryError

Hello,
we use AppSmith’s latest version deployed on EKS.
For the last few weeks our backoffice internal server has been crashing almost every day, and we get this error in the logs:

Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143119703, limit: 146014208)
	at java.base/java.nio.Bits.reserveMemory(Unknown Source)

	at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)

	at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)

	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)

	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)

	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)

	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)

	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)

	at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)

	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)

	at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)

	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)

	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:425)

	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)

	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)

	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)

	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:403)

	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)

	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

	at java.base/java.lang.Thread.run(Unknown Source)

After a restart everything works fine. Is there something we should look into in our implementation, or should we increase the Java heap memory size?

Best regards,
Vitalie.

Hello Vitalie! Could you please share your system specs and send us the full server logs to help us investigate the issue? You can grab the logs by running docker logs -f appsmith. Or you could follow the steps from this guide on how to get the logs.
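Since your instance runs on Kubernetes (EKS) rather than plain Docker, the equivalent is to stream the backend pod’s logs with kubectl — a minimal sketch, with the backend deployment name and namespace left as placeholders:

# Stream logs from the Appsmith backend pod (deployment name and namespace are placeholders)
kubectl logs -f deployment/<appsmith-backend-deployment> -n <namespace>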

You can also try setting this variable in the stacks/configuration/docker.env file and then restarting Appsmith, to increase the memory that the Appsmith backend server is allowed to use:

APPSMITH_JAVA_ARGS=-Xmx2048M
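On Kubernetes the same variable can instead be set on the backend container (for example through the ConfigMap the deployment already consumes, or directly in the pod spec) — a minimal sketch of the env approach, with placeholder names; note that the direct buffer memory limit shown in the error typically follows the max heap size unless overridden with -XX:MaxDirectMemorySize, and the container’s memory limit must leave room for the larger heap:

# Hypothetical snippet for the backend container in a Kubernetes Deployment
containers:
  - name: appsmith-backend          # placeholder container name
    env:
      - name: APPSMITH_JAVA_ARGS
        value: "-Xmx2048M"          # larger heap; keep the pod memory limit comfortably above this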

Hello @Amelia,
thank you for your reply, here are the logs of today’s crash:

[2023-04-02 18:00:04,125] userEmail=kedari.rao@ectusa.net, sessionId=f9812404-997d-442c-b103-85ac3af056aa, thread=nioEventLoopGroup-3-2, requestId=238d56debeb01f138bd26a833ac1d2d5 - 

org.springframework.dao.InvalidDataAccessResourceUsageException: Unexpected exception

	at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95)

	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 

Error has been observed at the following site(s):

	*__checkpoint ⇢ Handler com.appsmith.server.controllers.PageController#getPageView(String, String) [DispatcherHandler]

Original Stack Trace:

		at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95)

		at org.springframework.data.mongodb.core.ReactiveMongoTemplate.potentiallyConvertRuntimeException(ReactiveMongoTemplate.java:2574)

		at org.springframework.data.mongodb.core.ReactiveMongoTemplate.lambda$translateException$93(ReactiveMongoTemplate.java:2557)

		at reactor.core.publisher.Flux.lambda$onErrorMap$27(Flux.java:7088)

		at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onError(MonoFlatMapMany.java:255)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.maybeOnError(FluxConcatMapNoPrefetch.java:326)

		at reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onError(FluxConcatMapNoPrefetch.java:220)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.FluxCreate$BaseSink.error(FluxCreate.java:474)

		at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:802)

		at reactor.core.publisher.FluxCreate$BufferAsyncSink.error(FluxCreate.java:747)

		at reactor.core.publisher.FluxCreate$SerializedFluxSink.drainLoop(FluxCreate.java:237)

		at reactor.core.publisher.FluxCreate$SerializedFluxSink.drain(FluxCreate.java:213)

		at reactor.core.publisher.FluxCreate$SerializedFluxSink.error(FluxCreate.java:189)

		at reactor.core.publisher.LambdaMonoSubscriber.doError(LambdaMonoSubscriber.java:155)

		at reactor.core.publisher.LambdaMonoSubscriber.onError(LambdaMonoSubscriber.java:150)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.FluxMap$MapSubscriber.onError(FluxMap.java:134)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93)

		at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:241)

		at reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:315)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onError(MonoPeekTerminal.java:258)

		at com.appsmith.server.configurations.MDCConfig$MdcContextLifter.onError(MDCConfig.java:58)

		at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:201)

		at com.mongodb.reactivestreams.client.internal.MongoOperationPublisher.lambda$sinkToCallback$31(MongoOperationPublisher.java:573)

		at com.mongodb.reactivestreams.client.internal.OperationExecutorImpl.lambda$execute$2(OperationExecutorImpl.java:94)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97)

		at com.mongodb.internal.async.function.RetryingAsyncCallbackSupplier$RetryingCallback.onResult(RetryingAsyncCallbackSupplier.java:111)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97)

		at com.mongodb.internal.operation.FindOperation$1.onResult(FindOperation.java:376)

		at com.mongodb.internal.operation.CommandOperationHelper.lambda$transformingReadCallback$10(CommandOperationHelper.java:323)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor$1.onResult(DefaultServer.java:238)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.connection.CommandProtocolImpl$1.onResult(CommandProtocolImpl.java:82)

		at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection$1.onResult(DefaultConnectionPool.java:683)

		at com.mongodb.internal.connection.UsageTrackingInternalConnection$2.onResult(UsageTrackingInternalConnection.java:159)

		at com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46)

		at com.mongodb.internal.connection.InternalStreamConnection$2$1.onResult(InternalStreamConnection.java:506)

		at com.mongodb.internal.connection.InternalStreamConnection$2$1.onResult(InternalStreamConnection.java:501)

		at com.mongodb.internal.connection.InternalStreamConnection$MessageHeaderCallback$MessageCallback.onResult(InternalStreamConnection.java:798)

		at com.mongodb.internal.connection.InternalStreamConnection$MessageHeaderCallback$MessageCallback.onResult(InternalStreamConnection.java:788)

		at com.mongodb.internal.connection.InternalStreamConnection$5.failed(InternalStreamConnection.java:654)

		at com.mongodb.connection.netty.NettyStream.readAsync(NettyStream.java:316)

		at com.mongodb.connection.netty.NettyStream.handleReadResponse(NettyStream.java:347)

		at com.mongodb.connection.netty.NettyStream.access$1100(NettyStream.java:105)

		at com.mongodb.connection.netty.NettyStream$InboundBufferHandler.exceptionCaught(NettyStream.java:429)

		at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)

		at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)

		at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317)

		at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)

		at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)

		at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)

		at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)

		at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)

		at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177)

		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)

		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)

		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)

		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)

		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)

		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

		at java.base/java.lang.Thread.run(Unknown Source)

Caused by: com.mongodb.MongoInternalException: Unexpected exception

	at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:711)

	at com.mongodb.internal.connection.InternalStreamConnection.access$800(InternalStreamConnection.java:89)

	at com.mongodb.internal.connection.InternalStreamConnection$5.failed(InternalStreamConnection.java:654)

	at com.mongodb.connection.netty.NettyStream.readAsync(NettyStream.java:316)

	at com.mongodb.connection.netty.NettyStream.handleReadResponse(NettyStream.java:347)

	at com.mongodb.connection.netty.NettyStream.access$1100(NettyStream.java:105)

	at com.mongodb.connection.netty.NettyStream$InboundBufferHandler.exceptionCaught(NettyStream.java:429)

	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)

	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)

	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317)

	at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)

	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)

	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)

	at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)

	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)

	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177)

	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)

	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)

	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)

	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)

	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)

	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

	at java.base/java.lang.Thread.run(Unknown Source)

Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143219755, limit: 146014208)

	at java.base/java.nio.Bits.reserveMemory(Unknown Source)

	at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)

	at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)

	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)

	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)

	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)

	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)

	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)

	at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:140)

	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)

	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)

	... 8 common frames omitted

[2023-04-02 18:00:05,294] userEmail=kedari.rao@ectusa.net, sessionId=f9812404-997d-442c-b103-85ac3af056aa, thread=reactor-http-epoll-4, requestId=7257634a64d42903c3134bc97bb2c105 - 

reactor.netty.ReactorNetty$InternalNettyException: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143219755, limit: 146014208)

	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 

Error has been observed at the following site(s):

	*__checkpoint ⇢ Handler com.appsmith.server.controllers.ActionController#executeAction(Flux, String, String) [DispatcherHandler]

Original Stack Trace:

Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143219755, limit: 146014208)

	at java.base/java.nio.Bits.reserveMemory(Unknown Source)

	at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)

	at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)

	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)

	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)

	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)

	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)

	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)

	at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)

	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)

	at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)

	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)

	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:425)

	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)

	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)

	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)

	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:403)

	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)

	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

	at java.base/java.lang.Thread.run(Unknown Source)

[2023-04-02 18:00:05,352] userEmail=kedari.rao@ectusa.net, sessionId=f9812404-997d-442c-b103-85ac3af056aa, thread=reactor-http-epoll-4, requestId=7257634a64d42903c3134bc97bb2c105 - Operator called default onErrorDropped

reactor.netty.ReactorNetty$InternalNettyException: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143219755, limit: 146014208)

	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 

Error has been observed at the following site(s):

	*__checkpoint ⇢ Handler com.appsmith.server.controllers.ActionController#executeAction(Flux, String, String) [DispatcherHandler]

Original Stack Trace:

Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 143219755, limit: 146014208)

	at java.base/java.nio.Bits.reserveMemory(Unknown Source)

	at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)

	at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)

	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:701)

	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:676)

	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:215)

	at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:197)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:139)

	at io.netty.buffer.PoolArena.allocate(PoolArena.java:129)

	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)

	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)

	at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)

	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)

	at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)

	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)

	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:425)

	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)

	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)

	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)

	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:403)

	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)

	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

	at java.base/java.lang.Thread.run(Unknown Source)

We use AppSmith in EKS. Should I provide APPSMITH_JAVA_ARGS in the config (ConfigMap) used by the deployment, or specify it directly as an env variable in the deployment?

Best regards,
Vitalie.

Thanks. We are investigating this. Could you also please share your system specs?

Hello Amelia,

we deploy it in the EKS cluster; the AppSmith UI and backend have a dedicated node running “Amazon Linux 2 amd64”, a t3.medium (4 GiB RAM and 2 vCPUs).

By the way, yesterday I added APPSMITH_JAVA_ARGS=-Xmx2048M for our backoffice internal server.

Thank you in advance.

Best regards,
Vitalie.

Please share the following information to help us debug the issue:

  1. What is your Helm chart version? We currently only support 3.0.0. Please ensure you are using that one.
  2. Send us the values.yaml file that you are using (the commands below show one way to retrieve it).
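If it helps, both can usually be recovered from the cluster with standard Helm commands — a quick sketch, with the release name and namespace left as placeholders:

# List releases to find the chart version and namespace
helm list -A
# Dump the values currently applied to the release (release name and namespace are placeholders)
helm get values <release-name> -n <namespace> > values.yaml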

@VitalieSvet, please send us the required information so our team can debug the issue.

Hello Amelia,
I don’t have a values.yaml file; we installed it about 2 years ago, and I think at that time there was no Helm support. What I can share are the K8s definition files:
backoffice-backend

apiVersion: v1
kind: Service
metadata:
  name: backoffice-backend-service
  namespace: backoffice
  labels:
    app: backoffice-backend-service
spec:
  selector:
    app: backoffice-internal-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: backoffice-internal-server
  name: backoffice-internal-server
  namespace: backoffice
spec:
  replicas: 1
  selector:
    matchLabels:
        app: backoffice-internal-server
  template:
    metadata:
      labels:
        app: backoffice-internal-server
    spec:
      containers:
      - envFrom:
          - configMapRef:
              name: backoffice-config
          - configMapRef:
              name: backoffice-encryption-config
        image: appsmith/appsmith-server:latest
        name: backoffice-internal-server
        resources:
            limits:
              cpu: 300m
              memory: 600M
            requests:
              cpu: 100m
              memory: 300M
        ports:
          - containerPort: 8080
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app-type
                    operator: In
                    values:
                      - system    

backoffice-ingress.yaml

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: backoffice-ingress
      namespace: backoffice
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:

      backend:
        serviceName: "backoffice-editor"
        servicePort: 80
      rules:
      - host: 
        http:
          paths:
          - path: /api
            pathType: Prefix
            backend:
              serviceName: backoffice-backend-service
              servicePort: 8080
          - path: /oauth2
            pathType: Prefix
            backend:
              serviceName: backoffice-backend-service
              servicePort: 8080
          - path: /login
            pathType: Prefix
            backend:
              serviceName: backoffice-backend-service
              servicePort: 8080
          - path: /static
            pathType: Prefix
            backend:
              serviceName: backoffice-editor
              servicePort: 80
          - path: /
            pathType: Prefix
            backend:
              serviceName: backoffice-editor
              servicePort: 80

backoffice-mongo.yaml

apiVersion: v1
kind: Service
metadata:
  name: backoffice-mongo-service
  namespace: backoffice
  labels:
    name: mongo
spec:
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backoffice-mongo-statefulset
  namespace: backoffice
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "backoffice-mongo-service"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:4.4.6
        ports:
          - containerPort: 27017
        volumeMounts:
          - name: mongo-persistent-storage
            mountPath: /data/db
        envFrom:
          - configMapRef:
              name: backoffice-mongo-config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app-type
                    operator: In
                    values:
                      - system        
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

backoffice-redis.yaml

apiVersion: v1
kind: Service
metadata:
  name: backoffice-redis-service
  namespace: backoffice
  labels:
    name: redis
spec:
  type: NodePort
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    role: redis
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backoffice-redis-statefulset
  namespace: backoffice
spec:
  selector:
    matchLabels:
      role: redis
  serviceName: "backoffice-redis-service"
  replicas: 1
  template:
    metadata:
      labels:
        role: redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: redis
        image: redis
        ports:
          - containerPort: 6379
        volumeMounts:
          - name: redis-persistent-storage
            mountPath: /data
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app-type
                    operator: In
                    values:
                      - system      
  volumeClaimTemplates:
  - metadata:
      name: redis-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

frontend.yaml

apiVersion: v1
kind: Service
metadata:
  name: backoffice-editor
  namespace: backoffice
spec:
  selector:
    app: backoffice-editor
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backoffice-editor
  namespace: backoffice
spec:
  selector:
    matchLabels:
      app: backoffice-editor
  replicas: 1
  template:
    metadata:
      labels:
        app: backoffice-editor
    spec:
      containers:
      - name: nginx
        image: appsmith/appsmith-editor
        ports:
          - containerPort: 80
        envFrom:
          - configMapRef:
              name: backoffice-config
        volumeMounts:
          - name: backoffice-nginx-config-template
            mountPath: /nginx.conf.template
            subPath: nginx.conf.template
        resources:
            limits:
              cpu: 50m
              memory: 80M
            requests:
              cpu: 30m
              memory: 30M    
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app-type
                    operator: In
                    values:
                      - system      
      volumes:
      - name: backoffice-nginx-config-template
        configMap:
          name: backoffice-nginx-config-template

backoffice-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: backoffice-config
  namespace: backoffice
data:
  APPSMITH_MAIL_ENABLED: "false"
  # APPSMITH_MAIL_FROM: ""
  # APPSMITH_REPLY_TO: ""
  # APPSMITH_MAIL_HOST: ""
  # APPSMITH_MAIL_PORT: ""
  # APPSMITH_MAIL_SMTP_TLS_ENABLED: ""
  # APPSMITH_MAIL_USERNAME: ""
  # APPSMITH_MAIL_PASSWORD: ""
  # APPSMITH_MAIL_SMTP_AUTH: ""
  # APPSMITH_OAUTH2_GOOGLE_CLIENT_ID:  ""
  # APPSMITH_OAUTH2_GOOGLE_CLIENT_SECRET: ""
  # APPSMITH_OAUTH2_GITHUB_CLIENT_ID: ""
  # APPSMITH_OAUTH2_GITHUB_CLIENT_SECRET:  ""
  APPSMITH_GOOGLE_MAPS_API_KEY: ""
  APPSMITH_REDIS_URL: redis://backoffice-redis-service:6379
  APPSMITH_MONGODB_URI: mongodb://backoffice-admin:backoffice-password@backoffice-mongo-service:27017/backoffice?retryWrites=true&authSource=admin
  APPSMITH_DISABLE_TELEMETRY: "true"
  APPSMITH_RECAPTCHA_SITE_KEY: ""
  APPSMITH_RECAPTCHA_SECRET_KEY: ""
  APPSMITH_RECAPTCHA_ENABLED: "false"

backoffice-encryption-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: backoffice-encryption-config
  namespace: backoffice
data:
  APPSMITH_ENCRYPTION_PASSWORD: somevalue
  APPSMITH_ENCRYPTION_SALT: somevaluee

backoffice-mongo-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: backoffice-mongo-config
  namespace: backoffice
data:
  MONGO_INITDB_DATABASE: backoffice
  MONGO_INITDB_ROOT_USERNAME: backoffice-admin
  MONGO_INITDB_ROOT_PASSWORD: backoffice-password

backoffice-nginx-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: backoffice-nginx-config-template
  namespace: backoffice
data:
  nginx.conf.template: "
  server {
    listen 80;
    client_max_body_size 100m;

    gzip on;

    root /var/www/appsmith;

    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;

    location / {
        try_files $uri /index.html =404;
        alias /var/www/appsmith/;
        sub_filter __APPSMITH_SENTRY_DSN__ '${APPSMITH_SENTRY_DSN}';
        sub_filter __APPSMITH_SMART_LOOK_ID__ '${APPSMITH_SMART_LOOK_ID}';
        sub_filter __APPSMITH_OAUTH2_GOOGLE_CLIENT_ID__ '${APPSMITH_OAUTH2_GOOGLE_CLIENT_ID}';
        sub_filter __APPSMITH_OAUTH2_GITHUB_CLIENT_ID__ '${APPSMITH_OAUTH2_GITHUB_CLIENT_ID}';
        sub_filter __APPSMITH_MARKETPLACE_ENABLED__ '${APPSMITH_MARKETPLACE_ENABLED}';
        sub_filter __APPSMITH_SEGMENT_KEY__ '${APPSMITH_SEGMENT_KEY}';
        sub_filter __APPSMITH_OPTIMIZELY_KEY__ '${APPSMITH_OPTIMIZELY_KEY}';
        sub_filter __APPSMITH_ALGOLIA_API_ID__ '${APPSMITH_ALGOLIA_API_ID}';
        sub_filter __APPSMITH_ALGOLIA_SEARCH_INDEX_NAME__ '${APPSMITH_ALGOLIA_SEARCH_INDEX_NAME}';
        sub_filter __APPSMITH_ALGOLIA_API_KEY__ '${APPSMITH_ALGOLIA_API_KEY}';
        sub_filter __APPSMITH_CLIENT_LOG_LEVEL__ '${APPSMITH_CLIENT_LOG_LEVEL}';
        sub_filter __APPSMITH_GOOGLE_MAPS_API_KEY__ '${APPSMITH_GOOGLE_MAPS_API_KEY}';
        sub_filter __APPSMITH_TNC_PP__ '${APPSMITH_TNC_PP}';
        sub_filter __APPSMITH_VERSION_ID__ '${APPSMITH_VERSION_ID}';
        sub_filter __APPSMITH_VERSION_RELEASE_DATE__ '${APPSMITH_VERSION_RELEASE_DATE}';
        sub_filter __APPSMITH_INTERCOM_APP_ID__ '${APPSMITH_INTERCOM_APP_ID}';
        sub_filter __APPSMITH_MAIL_ENABLED__ '${APPSMITH_MAIL_ENABLED}';
        sub_filter __APPSMITH_DISABLE_TELEMETRY__ '${APPSMITH_DISABLE_TELEMETRY}';
        sub_filter __APPSMITH_CLOUD_SERVICES_BASE_URL__ '${APPSMITH_CLOUD_SERVICES_BASE_URL}';
        sub_filter __APPSMITH_RECAPTCHA_SITE_KEY__ '${APPSMITH_RECAPTCHA_SITE_KEY}';
        sub_filter __APPSMITH_RECAPTCHA_SECRET_KEY__ '${APPSMITH_RECAPTCHA_SECRET_KEY}';
        sub_filter __APPSMITH_RECAPTCHA_ENABLED__ '${APPSMITH_RECAPTCHA_ENABLED}';
    }
      
    location /f {
       proxy_pass https://cdn.optimizely.com/;
    }
}"

Could you please migrate to the Helm charts based installation? The slim containers have been deprecated, and several features are not even supported on them.
Please follow the steps from our migration guide and let us know if you face any issues.
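For reference, the Helm-based install generally comes down to adding the Appsmith chart repository and installing the chart with a values file — a sketch, assuming the documented repository URL and a placeholder namespace:

# Add the Appsmith Helm repository and install the chart with your values file
helm repo add appsmith https://helm.appsmith.com
helm repo update
helm install appsmith appsmith/appsmith -n <namespace> --create-namespace -f values.yaml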

Hello @Amelia

thank you, we installed another AppSmith instance in the backoffice-app namespace, but when we try to reimport the dump it cannot connect to the DB. I see in the logs that the connection string is completely different from what we have in EKS.
Error on importing:

kubectl exec -it appsmith-0 -n backoffice-app -- appsmithctl import_db
Defaulted container "appsmith" out of: appsmith, redis-init-container (init), mongo-init-container (init)
Importing database
stop backend & rts application before import database
backend: stopped
rts: stopped
/bin/sh: 1: --quiet: not found
MongoDB shell version v5.0.15
connecting to: mongodb://appsmith-mongodb-0.appsmith-mongodb.backoffice-app.svc.cluster.local:27017,appsmith-mongodb-1.appsmith-mongodb.backoffice-app.svc.cluster.local:27017/appsmith?compressors=disabled&gssapiServiceName=mongodb&retryWrites=true&ssl=true
{"t":{"$date":"2023-04-09T11:02:02.174Z"},"s":"I",  "c":"NETWORK",  "id":5490002, "ctx":"thread4","msg":"Started a new thread for the timer service"}
Error: couldn't connect to server appsmith-mongodb-1.appsmith-mongodb.backoffice-app.svc.cluster.local:27017, connection attempt failed: SocketException: stream truncated :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
import_database  ....
2023-04-09T11:02:02.257+0000	The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
2023-04-09T11:02:02.266+0000	preparing collections to restore from
2023-04-09T11:02:02.405+0000	0 document(s) restored successfully. 0 document(s) failed to restore.
import_database done
start backend & rts application after import database
backend: started
rts: started

The Mongo URL generated in EKS:
"APPSMITH_MONGODB_URI": "mongodb+srv://@@@:@@@@@appsmith-mongodb.backoffice-app.svc.cluster.local/appsmith?retryWrites=true&authSource=admin&ssl=false",

Thank you

Hello @VitalieSvet, can I ask what your Mongo version is? Also, what steps did you follow to install the new Appsmith instance?

Hello @Olawale
I followed these steps to migrate our AppSmith instance.

Currently our multi-container infrastructure runs MongoDB version 4.4.6.

Thank you

Hi there!
The current version of Appsmith requires MongoDB version >= 5. Can you upgrade your MongoDB version to match this?

To upgrade your MongoDB to version 5.0 for Appsmith, follow the steps detailed in the MongoDB official documentation to Upgrade a Replica Set to 5.0 (https://www.mongodb.com/docs/manual/release-notes/5.0-upgrade-replica-set/). If you’re using an older version of MongoDB and want to connect Appsmith to an external MongoDB instance, it’s important to upgrade to version 5.0 before doing so. You can find more information on connecting Appsmith to an external MongoDB instance in the Appsmith documentation (External MongoDB and Redis | Appsmith). Remember to back up your data before upgrading. You can follow these steps to back up your data (a companion shell sketch follows at the end of this post):

  1. Dump the mongo data from the old VM via the command:
mongodump --uri "mongodb://<rootUsername>:<rootPassword>@localhost:27017/appsmith" --out=/tmp/appsmith-mongo-dump
  2. Copy the dump file to the new VM.
  3. Open the file encryption.env and note the values in this file. The values here are important for Appsmith to be able to read sensitive information in the new installation.

Source: Backup and Restore (Migrate Applications | Appsmith)
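As a companion to the steps above, the Mongo version and feature compatibility level can be checked from the mongo shell, and the dump restored on the new instance — a rough sketch, with connection strings left as placeholders:

# On the old instance: confirm the feature compatibility level before/after upgrading the binaries
mongo "mongodb://<rootUsername>:<rootPassword>@localhost:27017/admin" --eval 'printjson(db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 }))'
# Once every member runs 5.0 binaries, raise the compatibility level
mongo "mongodb://<rootUsername>:<rootPassword>@localhost:27017/admin" --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })'
# On the new instance: restore the dump taken with mongodump
mongorestore --uri "mongodb://<user>:<password>@<host>:27017/?authSource=admin" --nsInclude="appsmith.*" /tmp/appsmith-mongo-dump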

Thank you for the explanation @Olawale. Just to make sure, I should:

  1. upgrade the DB in the old infrastructure
  2. export it
  3. import the data into the new infrastructure

Am I right?

Follow the doc Ola shared and you’ll be fine.

@pranav @Olawale Sorry guys, I am a little bit confused. Here’s the current situation:

  1. We were advised to migrate from the multi-container setup.
  2. During the migration I got a connectivity error when I tried to import the dumped DB.

In the old infrastructure I see we use MongoDB 4.4.6; how will upgrading it resolve the connectivity issue when I import the dump into the new infrastructure?

Okay, let’s back up a little.

  1. Were you able to migrate your setup from multi-container to single-container? (Keep the version of your instance the same, i.e. if you were on v1.8.1 on multi-container, use the same version when you migrate to the single-container setup — see the sketch after this list.)
  2. Once this is done, if you are using external Mongo, upgrade it to v5 (if it’s embedded Mongo, we upgrade this version for you), and upgrade your new instance to v1.9.2 (checkpoint release).
  3. From here onwards, you can upgrade to the latest with no problem.
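A rough sketch of how the instance version could be pinned and then stepped up on a Helm install, assuming the chart exposes the usual image.tag value (that key is an assumption — check the chart’s values.yaml for the exact name):

# Install pinned to the version you ran on multi-container, then move to the checkpoint release
helm install appsmith appsmith/appsmith -n <namespace> -f values.yaml --set image.tag=v1.8.1
helm upgrade appsmith appsmith/appsmith -n <namespace> -f values.yaml --set image.tag=v1.9.2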

If you are stuck at any of the steps mentioned above, let me know where you are stuck and share the logs.

PS: The discussion got sidetracked as Ola assumed that your problem is related to Mongo v5. But let’s take this step by step.

Cheers!

Hello @pranav, thank you for your response,
I’ve just checked: the multi-container version and the Helm version of AppSmith are the same.

new Version of AppSmith

I installed the new AppSmith version without any issues;
the problem appears when I try to import the MongoDB dump from the old version.

I think it does not pick up the right MongoDB URL; if you can point me to where I can set it for the import, that might solve the issue.

Best regards,
Vitalie.

Hey, I see the command is not the same as the one listed in this doc - Migrate From Multi-Container Setup (k8s) | Appsmith

Also, can you run these commands and share the logs from all of them?

Hello @pranav, as you can see it’s the same; in my case I just specified the namespace:

kubectl exec -it appsmith-0 -n backoffice-app -- appsmithctl import_db
vs
kubectl exec -it appsmith-0 -- appsmithctl import_db

It’s because I installed Appsmith in the backoffice-app namespace:
helm install appsmith appsmith/appsmith --values values.yml -n backoffice-app