Package versions
miden-proving-service: v0.7

Bug description
A variety of issues encountered while creating a systemd service with miden-proving-service start-proxy. Some of these could be user error.
All of these were with a single existing prover worker on a separate instance.
Unkillable
The proving service does not respond to kill commands. This was previously mentioned in #964 (comment).
e.g. systemctl kill miden-proving-service takes many seconds to have any effect.
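A stopgap on the operator side would be a systemd drop-in that keeps shutdown bounded. A sketch, assuming the process ignores SIGTERM (path and timeout are illustrative):

    # /etc/systemd/system/miden-proving-service.service.d/stop-timeout.conf
    # Workaround sketch, not a fix: cap the stop timeout so systemd
    # escalates to SIGKILL after 10s (SendSIGKILL defaults to yes).
    [Service]
    TimeoutStopSec=10s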
Lack of logs
There are no info-level logs to indicate startup was successful, and debug-level logs are quite noisy and hard to interpret even when the service is idle.
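If the binary honors a RUST_LOG-style environment filter, as many Rust/tracing services do (an assumption; I have not verified this one does), the noise could at least be scoped per module:

    # Assumption: tracing-subscriber EnvFilter semantics are in effect.
    # Keep dependency crates at info, the service's own logs at debug.
    Environment=RUST_LOG=info,miden_proving_service=debug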
Duplex ports don't work
It appears impossible to configure the proxy to re-use the same port as the worker.
As in, if the worker is on port X, then the proxy cannot host its gRPC on port X. This seems weird and incorrect, given they run on separate instances. The proxy does accept this configuration, but it is not accessible from the outside.
There are no logs indicating any problem; the only symptom is the lack of connection - the proxy doesn't appear to open the port.
Switching to a different proxy port, or a different worker port, solves this issue.
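For the record, a sketch of the broken vs. working layouts (hosts and ports are illustrative; the worker is on a separate instance):

    # broken: proxy gRPC port equals the worker's port; the proxy never
    # appears to open it, with no error logged
    #   worker  10.0.0.2:8082
    #   proxy   10.0.0.1:8082
    # working: distinct port on either side
    #   worker  10.0.0.2:8082
    #   proxy   10.0.0.1:8083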
toml and start-proxy
Workers passed to the start command automatically get added to the toml config file as a form of persistence. This feels incorrect - or rather, what's the expected way to add a worker to the proxy?
How do I script this? We initially started with the systemd service's start command being miden-proving-service start-proxy <worker>, as per the readme, but that seems incorrect since it re-registers the worker on every startup.
I imagine instead we should have done a once-off registration of the worker:
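(The exact snippet was lost from this copy. A hypothetical sketch, assuming the CLI exposes an add-workers subcommand for registering workers with an already-running proxy; the subcommand name and address are assumptions I have not verified:)

    # hypothetical once-off registration against a running proxy;
    # subcommand name and worker address are unverified assumptions
    miden-proving-service add-workers 10.0.0.2:8082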
It was unclear if there would be some sort of conflict from this. Probably not, but while debugging it was confusing that this worker was now defined in multiple places.
Startup health-checks
I think the proxy only starts up if all workers are present and healthy? As in, the proxy performs a worker health-check before starting, so a bad worker blocks the entire proxy from starting.
Instead I would expect no startup health-checks at all, aside from the normal periodic health checks it already performs. The proxy's gRPC service should not be blocked by workers being absent.
I may be wrong here; this was just a feeling after reading some of the code and running into startup issues with a failing worker connection.
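A quick way to verify this, reusing the start-proxy form from the readme (address is illustrative):

    # Point the proxy at an address with nothing listening, then check
    # whether the proxy's own gRPC port ever opens. If startup
    # health-checks gate it, it never will.
    miden-proving-service start-proxy 127.0.0.1:9999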