Error using v3.1.1 with Container pre-initialization and sharing #57

Open

Lukeesec opened this issue Feb 4, 2025 · 1 comment
Labels: question (Further information is requested)

Lukeesec commented Feb 4, 2025

Hi,

I am attempting to run a couple of apps in a public setting, which meant I needed to use "Container pre-initialization and sharing". The apps are set up not to shut down, to be shared amongst users, to scale, etc.:

  minimum_seats_available       = 1
  seats_per_container           = 10
  allow_container_re_use        = true
  max_total_instances           = 2
  max_lifetime                  = -1
  hide_navbar_on_main_page_link = true

There is no ShinyProxy login or anything since it's public. Everything runs well until maintenance forces everything to restart; once this occurs, ShinyProxy is not able to start the apps back up, and when a user attempts to open an app on the site, it will not work. Here is the error seen in the ShinyProxy operator:

2025-01-26T01:22:12.822Z  WARN 1 --- [           main] io.undertow.websockets.jsr               : UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
2025-01-26T01:22:12.899Z  INFO 1 --- [           main] io.undertow.servlet                      : Initializing Spring embedded WebApplicationContext
2025-01-26T01:22:12.902Z  INFO 1 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 7082 ms
2025-01-26T01:22:13.304Z  INFO 1 --- [           main] e.o.c.service.IdentifierService          : ShinyProxy runtimeId:                   gdlm
2025-01-26T01:22:13.509Z  INFO 1 --- [           main] e.o.c.service.IdentifierService          : ShinyProxy instanceID (hash of config): 3e165bd6a7a78e14914370f1ea64ab5d0b4b3ae8
2025-01-26T01:22:13.510Z  INFO 1 --- [           main] e.o.c.service.IdentifierService          : ShinyProxy realmId:                     shinyproxy-public-shinyproxy
2025-01-26T01:22:13.511Z  INFO 1 --- [           main] e.o.c.service.IdentifierService          : ShinyProxy version:                     1737734318344
CUT ---
2025-01-26T01:23:54.177Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (24/40)
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=712cef26-2e53-4905-8fdc-c37c3c655b9a] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=082cfbe8-00e1-4cf1-a157-9386cdbb9953] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=1081402b-806f-4183-b747-8ada1feba3cc] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=1f776a7e-f6a4-43ff-a1dd-42217858665c] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=69d43d76-53ed-489a-a12c-6e59c0e6704c] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=5a7a889a-5f55-4fba-92e7-fb62905287b4] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=51eddb04-ea6d-4765-8a57-dea125408b30] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=afdce53e-44d7-4176-a952-5338e76c08da] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=7b959444-04df-425f-85df-4bfcf5af5621] Created Seat
2025-01-26T01:23:55.194Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=2b6e4f86-4ec2-4e5c-9ef7-51667d1ef617] Created Seat
2025-01-26T01:23:55.660Z  INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1] Started DelegateProxy
2025-01-26T01:23:55.984Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (25/40)
2025-01-26T01:23:56.184Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (25/40)
2025-01-26T01:23:57.990Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (26/40)
2025-01-26T01:23:58.190Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (26/40)
2025-01-26T01:24:00.003Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (27/40)
2025-01-26T01:24:00.196Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (27/40)
2025-01-26T01:24:02.009Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (28/40)
2025-01-26T01:24:02.203Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (28/40)
2025-01-26T01:24:04.016Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (29/40)
2025-01-26T01:24:04.209Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (29/40)
2025-01-26T01:24:05.276Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.277Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.277Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.281Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.281Z  WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.282Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:06.022Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (30/40)
2025-01-26T01:24:06.216Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (30/40)
2025-01-26T01:24:08.030Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (31/40)
2025-01-26T01:24:08.226Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (31/40)
2025-01-26T01:24:10.037Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (32/40)
2025-01-26T01:24:10.243Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (32/40)
2025-01-26T01:24:12.046Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (33/40)
2025-01-26T01:24:12.249Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (33/40)
2025-01-26T01:24:14.051Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (34/40)
2025-01-26T01:24:14.256Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (34/40)
2025-01-26T01:24:14.276Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.276Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.276Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.279Z  WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:14.279Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:14.279Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:16.057Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (35/40)
2025-01-26T01:24:16.262Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (35/40)
2025-01-26T01:24:18.062Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (36/40)
2025-01-26T01:24:18.282Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (36/40)
2025-01-26T01:24:20.071Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (37/40)
2025-01-26T01:24:20.288Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (37/40)
2025-01-26T01:24:22.077Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (38/40)
2025-01-26T01:24:22.293Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (38/40)
2025-01-26T01:24:22.476Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.476Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.477Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.478Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:22.478Z  WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:22.478Z  WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:24.084Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (39/40)
2025-01-26T01:24:24.094Z ERROR 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [delegateProxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Failed to start DelegateProxy

eu.openanalytics.containerproxy.ProxyFailedToStartException: Container with index 0 failed to start
        at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:132) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.lambda$createDelegateProxyJob$5(ProxySharingScaler.java:410) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
        at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
Caused by: eu.openanalytics.containerproxy.ContainerFailedToStartException: Kubernetes Pod did not start in time
        at eu.openanalytics.containerproxy.backend.kubernetes.KubernetesBackend.startContainer(KubernetesBackend.java:356) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:126) ~[containerproxy-1.1.1.jar!/:1.1.1]
        ... 6 common frames omitted

2025-01-26T01:24:24.300Z  INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend     : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (39/40)
2025-01-26T01:24:24.313Z ERROR 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler           : [delegateProxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Failed to start DelegateProxy

eu.openanalytics.containerproxy.ProxyFailedToStartException: Container with index 0 failed to start
        at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:132) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.lambda$createDelegateProxyJob$5(ProxySharingScaler.java:410) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
        at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
Caused by: eu.openanalytics.containerproxy.ContainerFailedToStartException: Kubernetes Pod did not start in time
        at eu.openanalytics.containerproxy.backend.kubernetes.KubernetesBackend.startContainer(KubernetesBackend.java:356) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:126) ~[containerproxy-1.1.1.jar!/:1.1.1]
        ... 6 common frames omitted

2025-01-26T01:24:38.876Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.876Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.876Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.877Z  WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:38.877Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:38.877Z  WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog     : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:25:05.264Z  WARN 1 --- [ck-leadership-1] o.s.i.redis.util.RedisLockRegistry       : The UNLINK command has failed (not supported on the Redis server?); falling back to the regular DELETE command: Redis command timed out
2025-01-26T01:25:08.976Z  INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.976Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.977Z  INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog     : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z  INFO 1 --- [ioEventLoop-4-1] i.l.core.protocol.ReconnectionHandler    : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z  INFO 1 --- [ioEventLoop-4-2] i.l.core.protocol.ReconnectionHandler    : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z  INFO 1 --- [ioEventLoop-4-1] i.l.core.protocol.ReconnectionHandler    : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.986Z  INFO 1 --- [ck-leadership-1] e.o.c.s.leader.redis.RedisLeaderService  : This server (runtimeId: gdlm) is no longer the leader.
2025-01-26T01:25:08.986Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:08.988Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=4fc57de4-aca9-472a-8bdc-d4838efeecf0] Creating DelegateProxy
2025-01-26T01:25:08.988Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService    : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1659/0x00007ffa01a3f160@3ecb8c0f, onlyIfLeader=true]:

java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@297bb039[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@1c26088e[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@44a3d112]] rejected from java.util.concurrent.ThreadPoolExecutor@5f5da92a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
        at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
        at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

2025-01-26T01:25:09.043Z  INFO 1 --- [ck-leadership-1] e.o.c.s.leader.redis.RedisLeaderService  : This server (runtimeId: gdlm) is now the leader.
2025-01-26T01:25:09.044Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=4fc57de4-aca9-472a-8bdc-d4838efeecf0] Pending DelegateProxy not created by this instance, marking for removal
2025-01-26T01:25:09.049Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:09.049Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=oi delegateProxyId=f4da310e-f3a7-4a48-a817-42cab79b3457] Creating DelegateProxy
2025-01-26T01:25:09.050Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService    : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1702/0x00007ffa01a5f630@6fc1f5b6, onlyIfLeader=true]:

java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6b3947f1[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@2822067b[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@28109a09]] rejected from java.util.concurrent.ThreadPoolExecutor@5f5da92a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
        at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
        at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

2025-01-26T01:25:09.051Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=EXAMPLE-P] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:09.052Z  INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler           : [specId=EXAMPLE-P delegateProxyId=25082613-ea66-40af-9f28-3ae8b6c78508] Creating DelegateProxy
2025-01-26T01:25:09.052Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService    : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1702/0x00007ffa01a5f630@bcd9e04, onlyIfLeader=true]:

java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@74f398a1[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@432dea95[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@620a2bce]] rejected from java.util.concurrent.ThreadPoolExecutor@14ffafcc[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
        at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
        at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
        at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

I am assuming it has something to do with the Redis restore mode. What fixes it is restarting the ShinyProxy (sp) pod; after the restart, everything works as expected.

I am currently testing removing the Redis store-mode, so that it does not try to recover apps or anything.
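
For reference, the change I'm testing looks roughly like this in the spec (a minimal sketch; store-mode: None is my assumption of the property involved, and the spring.session change mirrors the block shown in the full config below):

spec:
  proxy:
    # app recovery / state in Redis was previously enabled; disabling it for this test
    store-mode: None            # assumption: was store-mode: Redis
  spring:
    session:
      store-type: none          # assumption: was "redis"; sessions are then kept in memory only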

Configuration:

apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  annotations:
    meta.helm.sh/release-name: shinyproxy
    meta.helm.sh/release-namespace: public-shinyproxy
  creationTimestamp: "2025-01-02T21:47:48Z"
  generation: 25
  labels:
    app.kubernetes.io/managed-by: Helm
  name: shinyproxy
  namespace: public-shinyproxy
  resourceVersion: "194510553"
  uid: 675aeb85-fab8-4bd7-8007-9e7b0c92e990
spec:
  app-namespaces:
  - EXAMPLE
  fqdn: public.apps.EXAMPLE.com
  image: openanalytics/shinyproxy:3.1.1
  image-pull-policy: Always
  kubernetesIngressPatches: |
    - op: add
      path: /spec/ingressClassName
      value: azure-application-gateway
    - op: add
      path: /spec/rules/0/host
      value: public.apps.EXAMPLE.com
    - op: add
      path: /spec/rules/0/http/paths/0/path
      value: /
    - op: add
      path: /spec/tls
      value:
        - hosts:
          - public.apps.EXAMPLE.com
          secretName: shinyproxy-server-cert
    - op: add
      path: /metadata/annotations
      value:
        cert-manager.io/cluster-issuer: prod-letsencrypt-production
        cert-manager.io/acme-challenge-type: "dns01"
        kubernetes.io/tls-acme: "true"
        appgw.ingress.kubernetes.io/ssl-redirect: "true"
        appgw.ingress.kubernetes.io/health-probe-interval: 240
        # https://support.openanalytics.eu/t/null-pointer-exception-error-when-deploying-at-production-server/2997
        appgw.ingress.kubernetes.io/health-probe-status-codes: "200,500"
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_PASSWORD
        valueFrom:
          secretKeyRef:
            name: redis
            key: REDIS_PASSWORD
    - op: add
      path: /spec/containers/0/resources
      value:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 60m
          memory: 400Mi
    - op: add
      path: /spec/serviceAccountName
      value: shinyproxy-sa
    - op: add
      path: /spec/volumes/-
      value:
          name: data
          persistentVolumeClaim:
            claimName: azureblob-nfs-public-shinyproxy-pvc
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
          name: data
          mountPath: /mnt/data
          readOnly: true
    - op: add
      path: /spec/securityContext
      value:
        fsGroup: 0
    - op: add
      path: /metadata/annotations
      value:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/actuator/prometheus"
  management:
    prometheus:
      metrics:
        export:
          enabled: true
  proxy:
    authentication: none
    container-backend: kubernetes
    container-wait-time: "120000"
    default-proxy-max-lifetime: "360"
    kubernetes:
      internal-networking: true
      namespace: public-shinyproxy
    landing-page: /
    logo-url: file://...
    port: 8080
    same-site-cookie: None
    specs:
    - allow-container-re-use: "true"
      container-env:
        WWW_ROOT_PATH: '#{proxy.getRuntimeValue(''SHINYPROXY_PUBLIC_PATH'')}'
      container-image: example
      description: ''
      display-name: example
      hide-navbar-on-main-page-link: "true"
      id: example
      kubernetes-pod-patches: |
        - op: replace
          path: /metadata/namespace
          value: example
      labels:
        managed_by: terraform
      logo-url: ...
      max-lifetime: "-1"
      max-total-instances: "2"
      minimum-seats-available: "1"
      port: 3838
      resource-name: sp-pod-pub-example-#{proxy.id}-0
      seats-per-container: "10"
    template-path: /mnt/data/2col
    title: Apps
    usage-stats-url: micrometer
  server:
    forward-headers-strategy: native
    frame-options: sameorigin
    secure-cookies: "true"
  spring:
    data:
      redis:
        database: 1
        host: redis.public-shinyproxy.svc.cluster.local
        password: example
        ssl:
          enabled: false
    session:
      store-type: redis

LEDfan added the question label on Feb 13, 2025

LEDfan (Member) commented Feb 13, 2025

Hi, are you using Redis Sentinel? If you want Redis to continue working while services are restarting, it's better to use Redis Sentinel, since it runs multiple replicas and can survive a restart. You can easily deploy it using our kustomize base: https://github.com/openanalytics/shinyproxy-operator/tree/master/docs/deployment/bases/redis-sentinel
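
For completeness, once Sentinel is deployed, pointing ShinyProxy at it should just be a matter of swapping the spring.data.redis block for the standard Spring Boot Sentinel properties, roughly like this (a minimal sketch; the master name and node addresses below are placeholders, the real values come from the Sentinel deployment):

spec:
  spring:
    data:
      redis:
        database: 1
        password: example
        sentinel:
          master: shinyproxy        # placeholder: the master/group name configured in Sentinel
          nodes:
            - redis-sentinel-0.redis-sentinel.public-shinyproxy.svc.cluster.local:26379
            - redis-sentinel-1.redis-sentinel.public-shinyproxy.svc.cluster.local:26379
            - redis-sentinel-2.redis-sentinel.public-shinyproxy.svc.cluster.local:26379
    session:
      store-type: redis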
