# Add a shutting down message to `polykey agent start` command (#157)

## Comments
hmm, I think this should just be a message that goes into stderr. It's a feedback message IMO, so it's fitting.

---
@tegefaulkes The issue is still persisting.

---
Various hours after the demo with Wen, I checked my Activity Monitor and noticed that Polykey was still actively running in the background on my machine. I had run control+C in the terminal where I was running my polykey commands, but the initial terminal where I ran `polykey agent start` had been closed, and it's possible that I did not properly shut down Polykey with control+C, which could explain why it was still running. However, with all my terminal applications closed, I was able to open a new terminal and simply run … Point is, Wen would run … There are some inconsistencies in the shutdown process that are possibly due to various known unknown factors (known unknowns because Wen's commands and sessions were not identical to mine), so maybe it is important to have more responsiveness in knowing whether the Polykey agent has shut down properly or is still running (and its status) if one were to use it several hours after starting the program.

---
It is intended for there to be multiple Polykey agents on a desktop. So when you do `pk agent stop`, you have to be aware of which one it is talking to. The main one is at the default home path; agents are distinguished by node path. It was already reported earlier that ctrl+c has problems when stopping a foreground agent, but backgrounded agents always require `pk agent stop`. Note that if you run `pk agent start` twice off the same node path, it should prevent the second command from running.

---
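The per-node-path start guard mentioned above can be sketched as follows. This is a hypothetical illustration of how such a guard commonly works (an exclusive lock file under the node path), not Polykey's actual implementation; the `agent.lock` file name and `acquireAgentLock` function are assumptions.

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Hypothetical sketch: only one agent may hold the lock file for a node path.
function acquireAgentLock(nodePath: string): boolean {
  const lockFile = path.join(nodePath, 'agent.lock');
  try {
    // 'wx' opens for writing but fails if the file already exists,
    // so only the first agent to start on this node path succeeds.
    const fd = fs.openSync(lockFile, 'wx');
    fs.writeSync(fd, String(process.pid));
    fs.closeSync(fd);
    return true;
  } catch {
    return false; // another agent already owns this node path
  }
}

const nodePath = fs.mkdtempSync(path.join(os.tmpdir(), 'pk-node-'));
console.log(acquireAgentLock(nodePath)); // true: first start succeeds
console.log(acquireAgentLock(nodePath)); // false: second start is refused
```

A real implementation would also need to handle stale locks left behind by a crashed agent (for example by checking whether the recorded PID is still alive).

---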
I'm worried that there may be a weird interaction with … Keep in mind that …

---
Ok, no problem running …

---
The problem is, I can't re-create this and I have very little context for what it could be. Does it happen every time, or only sometimes? What are you doing when it happens? Can you narrow down a cause or procedure to trigger it? For me to make a better guess at what's happening, I need some logging from when it happens. For example, when I run this with logging I get the following: …

These logs will help a lot in working out what's wrong, because it could be a few things. It could be ignoring the signal outright, or some domain that is shutting down is taking a long time or is deadlocked. I need to know where to look.

---
I've added a callback to the …

---
I can't seem to figure out how to reproduce what the comments are describing, so I'll leave it here.

---
@tegefaulkes To clarify, what would help going forward is to always run the command …

---
When testing, always run the agent in the foreground in a separate terminal. You can do it verbose …

---
No, don't add options for no reason. Just learn how to do process redirection and redirect stdout or stderr.

---
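The redirection suggested above can be sketched like this. The `run_agent` function is a stand-in so the example is self-contained; with the real CLI you would redirect `polykey agent start` the same way.

```shell
# Stand-in for a foreground agent process (hypothetical).
run_agent() {
  echo "agent started"            # normal output goes to stdout
  echo "INFO: agent running" >&2  # log feedback goes to stderr
}

# Send stdout to agent.out and stderr to agent.err:
run_agent > agent.out 2> agent.err

# Or merge both streams into a single log file:
run_agent > agent.log 2>&1
```

This is why dedicated out-file/err-file options are only needed for a backgrounded agent: in the foreground, the shell already provides this.

---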
I'm not saying we add it for no reason. It's already a feature:

```ts
const backgroundOutFile = new commander.Option(
  '-bof, --background-out-file <path>',
  'Path to STDOUT for agent process',
);
const backgroundErrFile = new commander.Option(
  '-bef, --background-err-file <path>',
  'Path to STDERR for agent process',
);
```

Though it only applies to a backgrounded agent. An agent running in the foreground should use redirection.

---
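The options above can be honoured when backgrounding roughly as sketched below, using Node's `child_process.spawn` with file descriptors for `stdio`. This is a hedged illustration, not Polykey's actual backgrounding code; the `startBackground` function and file names are assumptions, and the spawned child here is a stand-in for the agent process.

```ts
import { spawn } from 'child_process';
import * as fs from 'fs';

// Sketch: direct a detached child's stdout/stderr into the given files,
// the role played by --background-out-file / --background-err-file.
function startBackground(outFile: string, errFile: string) {
  const out = fs.openSync(outFile, 'a');
  const err = fs.openSync(errFile, 'a');
  // Stand-in child process; a real agent would be spawned here instead.
  const child = spawn(
    process.execPath,
    ['-e', "console.log('agent stdout'); console.error('agent stderr');"],
    { detached: true, stdio: ['ignore', out, err] },
  );
  child.unref(); // let the parent exit while the agent keeps running
  return child;
}

startBackground('agent.out', 'agent.err');
```

With `detached: true` and `unref()`, the parent CLI process can return immediately while the agent continues writing to the two files.

---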
@amydevs Is this done? I recall you made some changes relating to this.

---
I actually do see this: …

However, I'm not sure if it looks as nice as it can be. It's sort of because …

---
Still needs to be verified.

---
There was a duplicate in Linear for this issue. It seemed to auto-close this despite removing the link first.

---
When stopping in the foreground with … and via … So I'm calling this one done.

---
So this idea of a message at the end: I didn't get around to reviewing this, but there are some guidelines/rules for CLI programs that are important.

So I can see that … There are several problems with this.

So the real question is: why do we even need this message? Is it because our agent stopping process is buggy? That's what the …

---
We added it because of the lack of feedback. @CryptoTotalWar at the time was running into issues where the agent wasn't stopping properly, so adding this message makes it clear whether it responded at all when triggered to stop.

---
Yes, I'm aware of that, but my arguments above indicate that such a message doesn't fit within the rest of the product design… and there are better ways of doing it.

---
I think some indication of stopping is needed regardless of the logging level. It adds a degree of responsiveness we don't otherwise get right now. The agent takes a moment to gracefully stop, so as a user, without that feedback it feels unresponsive, and in the worst case you can't tell if anything is happening. So I say we include a stderr message for the agent stopping as a form of feedback.

---
The message needs to match the logging standard. You can emit it as a WARN-level log for this shutdown.

---
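The stderr feedback discussed above can be sketched as below. The two message strings come from this issue; everything else (the `WARN:polykey:` log format, the `registerShutdownMessages` wiring, the `stop` callback) is a hypothetical illustration, not Polykey's actual logging code.

```ts
// Assumed WARN-level log format for illustration only.
function formatWarn(msg: string): string {
  return `WARN:polykey:${msg}`;
}

// Hook shutdown signals and emit feedback to stderr around a graceful stop.
function registerShutdownMessages(stop: () => Promise<void>): () => Promise<void> {
  const handler = async () => {
    process.stderr.write(formatWarn('Agent is shutting down') + '\n');
    await stop(); // graceful teardown may take a moment; the user now sees why
    process.stderr.write(formatWarn('Agent has shut down') + '\n');
  };
  process.once('SIGINT', handler);
  process.once('SIGTERM', handler);
  return handler; // can also be invoked from the `agent stop` command path
}
```

Writing to stderr keeps stdout clean for machine-readable output, which matches the thread's position that feedback messages belong on stderr.

---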
New issue at #270 for fixing this. |
## Specification

Right now the agent seems unresponsive when stopping. Generally there are two ways we can stop the agent:

1. `ctrl+c` to the running agent.
2. The `polykey agent stop` command for the agent.

Both will cause the `polykey` agent to start shutting down, but no messages are output when doing this. So unless you ran the agent with the verbose `-v` flag, you can't tell anything is happening until it exits.

To address this we need to add a message when the agent starts to shut down. We need to hook onto this event on the `PolykeyAgent` and output `Agent is shutting down` and `Agent has shut down` messages.

There is a possibility that the agent is actually failing to shut down via the `agent stop` command. But I've been unable to replicate this in brief testing on nix and the mac machine. In any case it's something to keep an eye out for.

## Additional context
## Tasks