Hi,

I am attempting to run a couple of apps in a public setting, which meant I needed to use "Container pre-initialization and sharing". The apps are set up to not shut down, to be shared among users, to scale, etc.
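Roughly, the app specs look like the sketch below (simplified, with placeholder names rather than my exact configuration; the pre-initialization properties are the ones from the ShinyProxy docs):

    proxy:
      authentication: none              # public site, no login
      container-backend: kubernetes
      store-mode: Redis                 # state shared/persisted via Redis
      stop-proxies-on-shutdown: false   # keep apps running across ShinyProxy restarts
      specs:
        - id: example-app                       # placeholder id
          container-image: example/app:latest   # placeholder image
          minimum-seats-available: 10           # pre-initialize this many seats
          seats-per-container: 5                # share one container among multiple users
          allow-container-re-use: true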
There is no ShinyProxy login or anything, since it's public. Everything runs well until maintenance forces everything to restart. Once this occurs, ShinyProxy is not able to start the apps back up, and when a user attempts to open an app on the site it does not work. Here is the error seen in the ShinyProxy operator:
2025-01-26T01:22:12.822Z WARN 1 --- [ main] io.undertow.websockets.jsr : UT026010: Buffer pool was not set on WebSocketDeploymentInfo, the default pool will be used
2025-01-26T01:22:12.899Z INFO 1 --- [ main] io.undertow.servlet : Initializing Spring embedded WebApplicationContext
2025-01-26T01:22:12.902Z INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 7082 ms
2025-01-26T01:22:13.304Z INFO 1 --- [ main] e.o.c.service.IdentifierService : ShinyProxy runtimeId: gdlm
2025-01-26T01:22:13.509Z INFO 1 --- [ main] e.o.c.service.IdentifierService : ShinyProxy instanceID (hash of config): 3e165bd6a7a78e14914370f1ea64ab5d0b4b3ae8
2025-01-26T01:22:13.510Z INFO 1 --- [ main] e.o.c.service.IdentifierService : ShinyProxy realmId: shinyproxy-public-shinyproxy
2025-01-26T01:22:13.511Z INFO 1 --- [ main] e.o.c.service.IdentifierService : ShinyProxy version: 1737734318344
CUT ---
2025-01-26T01:23:54.177Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (24/40)
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=712cef26-2e53-4905-8fdc-c37c3c655b9a] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=082cfbe8-00e1-4cf1-a157-9386cdbb9953] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=1081402b-806f-4183-b747-8ada1feba3cc] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=1f776a7e-f6a4-43ff-a1dd-42217858665c] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=69d43d76-53ed-489a-a12c-6e59c0e6704c] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=5a7a889a-5f55-4fba-92e7-fb62905287b4] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=51eddb04-ea6d-4765-8a57-dea125408b30] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=afdce53e-44d7-4176-a952-5338e76c08da] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=7b959444-04df-425f-85df-4bfcf5af5621] Created Seat
2025-01-26T01:23:55.194Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1 seatId=2b6e4f86-4ec2-4e5c-9ef7-51667d1ef617] Created Seat
2025-01-26T01:23:55.660Z INFO 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=0cfa4531-33c1-4f06-bb82-12321180d9f1] Started DelegateProxy
2025-01-26T01:23:55.984Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (25/40)
2025-01-26T01:23:56.184Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (25/40)
2025-01-26T01:23:57.990Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (26/40)
2025-01-26T01:23:58.190Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (26/40)
2025-01-26T01:24:00.003Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (27/40)
2025-01-26T01:24:00.196Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (27/40)
2025-01-26T01:24:02.009Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (28/40)
2025-01-26T01:24:02.203Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (28/40)
2025-01-26T01:24:04.016Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (29/40)
2025-01-26T01:24:04.209Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (29/40)
2025-01-26T01:24:05.276Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.277Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.277Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.281Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.281Z WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:05.282Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:06.022Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (30/40)
2025-01-26T01:24:06.216Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (30/40)
2025-01-26T01:24:08.030Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (31/40)
2025-01-26T01:24:08.226Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (31/40)
2025-01-26T01:24:10.037Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (32/40)
2025-01-26T01:24:10.243Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (32/40)
2025-01-26T01:24:12.046Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (33/40)
2025-01-26T01:24:12.249Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (33/40)
2025-01-26T01:24:14.051Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (34/40)
2025-01-26T01:24:14.256Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (34/40)
2025-01-26T01:24:14.276Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.276Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.276Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:14.279Z WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:14.279Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:14.279Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:16.057Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (35/40)
2025-01-26T01:24:16.262Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (35/40)
2025-01-26T01:24:18.062Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (36/40)
2025-01-26T01:24:18.282Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (36/40)
2025-01-26T01:24:20.071Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (37/40)
2025-01-26T01:24:20.288Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (37/40)
2025-01-26T01:24:22.077Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (38/40)
2025-01-26T01:24:22.293Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (38/40)
2025-01-26T01:24:22.476Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.476Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.477Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:22.478Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:22.478Z WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:22.478Z WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:24.084Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Kubernetes Pod not ready yet, trying again (39/40)
2025-01-26T01:24:24.094Z ERROR 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [delegateProxyId=85e23122-fd67-4801-8f4a-f1927ade75d8 specId=EXAMPLE-P] Failed to start DelegateProxy
eu.openanalytics.containerproxy.ProxyFailedToStartException: Container with index 0 failed to start
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:132) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.lambda$createDelegateProxyJob$5(ProxySharingScaler.java:410) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
Caused by: eu.openanalytics.containerproxy.ContainerFailedToStartException: Kubernetes Pod did not start in time
at eu.openanalytics.containerproxy.backend.kubernetes.KubernetesBackend.startContainer(KubernetesBackend.java:356) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:126) ~[containerproxy-1.1.1.jar!/:1.1.1]
... 6 common frames omitted
2025-01-26T01:24:24.300Z INFO 1 --- [haringScaler-15] e.o.c.b.kubernetes.KubernetesBackend : [user=null proxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Kubernetes Pod not ready yet, trying again (39/40)
2025-01-26T01:24:24.313Z ERROR 1 --- [haringScaler-15] e.o.c.b.d.p.ProxySharingScaler : [delegateProxyId=a16546b0-24fc-400d-ac5e-a64aa632f6e7 specId=c3332-edia] Failed to start DelegateProxy
eu.openanalytics.containerproxy.ProxyFailedToStartException: Container with index 0 failed to start
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:132) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.lambda$createDelegateProxyJob$5(ProxySharingScaler.java:410) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
Caused by: eu.openanalytics.containerproxy.ContainerFailedToStartException: Kubernetes Pod did not start in time
at eu.openanalytics.containerproxy.backend.kubernetes.KubernetesBackend.startContainer(KubernetesBackend.java:356) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.startProxy(AbstractContainerBackend.java:126) ~[containerproxy-1.1.1.jar!/:1.1.1]
... 6 common frames omitted
2025-01-26T01:24:38.876Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.876Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.876Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:24:38.877Z WARN 1 --- [ioEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:38.877Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:24:38.877Z WARN 1 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379]: Connection refused: redis.public-shinyproxy.svc.cluster.local/10.16.243.69:6379
2025-01-26T01:25:05.264Z WARN 1 --- [ck-leadership-1] o.s.i.redis.util.RedisLockRegistry : The UNLINK command has failed (not supported on the Redis server?); falling back to the regular DELETE command: Redis command timed out
2025-01-26T01:25:08.976Z INFO 1 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.976Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.977Z INFO 1 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z INFO 1 --- [ioEventLoop-4-1] i.l.core.protocol.ReconnectionHandler : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z INFO 1 --- [ioEventLoop-4-2] i.l.core.protocol.ReconnectionHandler : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.983Z INFO 1 --- [ioEventLoop-4-1] i.l.core.protocol.ReconnectionHandler : Reconnected to redis.public-shinyproxy.svc.cluster.local/<unresolved>:6379
2025-01-26T01:25:08.986Z INFO 1 --- [ck-leadership-1] e.o.c.s.leader.redis.RedisLeaderService : This server (runtimeId: gdlm) is no longer the leader.
2025-01-26T01:25:08.986Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=oi] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:08.988Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=4fc57de4-aca9-472a-8bdc-d4838efeecf0] Creating DelegateProxy
2025-01-26T01:25:08.988Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1659/0x00007ffa01a3f160@3ecb8c0f, onlyIfLeader=true]:
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@297bb039[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@1c26088e[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@44a3d112]] rejected from java.util.concurrent.ThreadPoolExecutor@5f5da92a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
2025-01-26T01:25:09.043Z INFO 1 --- [ck-leadership-1] e.o.c.s.leader.redis.RedisLeaderService : This server (runtimeId: gdlm) is now the leader.
2025-01-26T01:25:09.044Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=4fc57de4-aca9-472a-8bdc-d4838efeecf0] Pending DelegateProxy not created by this instance, marking for removal
2025-01-26T01:25:09.049Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=oi] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:09.049Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=oi delegateProxyId=f4da310e-f3a7-4a48-a817-42cab79b3457] Creating DelegateProxy
2025-01-26T01:25:09.050Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1702/0x00007ffa01a5f630@6fc1f5b6, onlyIfLeader=true]:
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6b3947f1[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@2822067b[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@28109a09]] rejected from java.util.concurrent.ThreadPoolExecutor@5f5da92a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
2025-01-26T01:25:09.051Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=EXAMPLE-P] Scale up required, trying to create 1 DelegateProxies
2025-01-26T01:25:09.052Z INFO 1 --- [GlobalEventLoop] e.o.c.b.d.p.ProxySharingScaler : [specId=EXAMPLE-P delegateProxyId=25082613-ea66-40af-9f28-3ae8b6c78508] Creating DelegateProxy
2025-01-26T01:25:09.052Z ERROR 1 --- [GlobalEventLoop] e.o.c.s.leader.GlobalEventLoopService : Error while processing event in the GlobalEventLoop Callback[callback=eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1702/0x00007ffa01a5f630@bcd9e04, onlyIfLeader=true]:
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@74f398a1[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@432dea95[Wrapped task = eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler$$Lambda$1711/0x00007ffa01a67118@620a2bce]] rejected from java.util.concurrent.ThreadPoolExecutor@14ffafcc[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[na:na]
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) ~[na:na]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.scaleUp(ProxySharingScaler.java:364) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.proxysharing.ProxySharingScaler.reconcile(ProxySharingScaler.java:326) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.service.leader.GlobalEventLoopService.lambda$new$0(GlobalEventLoopService.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
I am assuming it has something to do with the Redis restore mode. What fixes this is restarting the ShinyProxy (sp) pod; after the restart, everything works as expected.
I am currently testing with the Redis store-mode removed, so that ShinyProxy does not recover apps or anything.
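Concretely, the change I am testing looks like this (a sketch; if I read the docs correctly, None is the default store-mode and means nothing is persisted, so no app recovery):

    proxy:
      store-mode: None   # was: Redis; with None, state stays in memory and apps are not recovered after a restart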