Add option to clean remote folder when deleting nodes #6732
Comments
Somewhat related: #4693
Hi @superstar54, I’m interested in solving this issue. Could you guide me on how to get started? For example, are there any specific steps I should follow, or resources I should review to better understand the context of this issue?
Hi @Kaustbh, thanks for your interest. You may need to wait for the maintainers to decide.
I'm already working on this issue and will open a PR for it soon.
At the moment in aiida-core, only calcjob and workchain nodes have remote resources that need to be cleaned, so I don't think it makes sense to add a --clean-remote-folder option to every node type, since there are plenty of other node types that have no remote folder attached. Even if you decide to add this to aiida-core, would it make more sense to add it to
There is nothing special to WorkGraph here. From a user's point of view, an option to clean the remote folder when deleting a node is very useful. Note that when you delete a calcfunction node, it could also delete the associated WorkChain/CalcJob, thus the
Could you explain why it makes more sense to add to
Where do you see calls to a workchain or calcjob from a calcfunction?
Besides WorkChain and WorkGraph, I really don't see the other cases you were mentioning. What is your real example? Which node were you deleting?
The option exists, and that is
Because only "processes" have a remote workdir attached.
I may have a better solution. From my comment above: the option you asked for is a batch operation of "search the descendant calcjobs and then call". We could actually provide a command that does the "search the descendant calcjobs" part and find a way to pipe the verdi commands; then you could do batch operations using the stdout of the previous command. Worst case, have the two commands implemented separately and add a command that uses click to call them in sequence.
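A minimal sketch of that "worst case" variant, assuming two hypothetical click commands for the two steps (the command names below are placeholders, not the actual verdi entry points); `ctx.invoke` lets the wrapper reuse the callbacks of the other two commands directly:

```python
import click


@click.command()
@click.argument('pks', nargs=-1, type=int)
def clean_workdirs(pks):
    """Hypothetical command: clean the remote workdirs of the given calcjobs."""
    click.echo(f'cleaning workdirs of {pks}')


@click.command()
@click.argument('pks', nargs=-1, type=int)
def delete_nodes_cmd(pks):
    """Hypothetical command: delete the given nodes."""
    click.echo(f'deleting nodes {pks}')


@click.command()
@click.argument('pks', nargs=-1, type=int)
@click.pass_context
def delete_with_clean(ctx, pks):
    """Wrapper that calls the two commands in sequence, as proposed above."""
    ctx.invoke(clean_workdirs, pks=pks)
    ctx.invoke(delete_nodes_cmd, pks=pks)


if __name__ == '__main__':
    delete_with_clean()
```

Running e.g. `python delete_with_clean.py 101 102` would execute the two steps in order without going through the shell pipeline.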
Because of the
As a user, when I delete a node, I want to have an option to delete the associated data, both locally and remotely.
I don't think it's a good solution, because each
@unkcpz The problem is that a process does not have a workdir. And even if we take the meaning loosely, that brings about more issues, such as a process first having to be stopped or killed, etc. Workdirs are properties of nodes, not processes, so if anything general I'd go with
It is not there by default; in @khsrali's PR it is done by
If you mean to clean the workdir of
@khsrali, only
RemoteData nodes are also deleted based on the traversal rules.
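For illustration, a dry run of the deletion shows which nodes, including any RemoteData, the traversal rules would include. A minimal sketch using the Python API; `PK` is a hypothetical placeholder:

```python
from aiida import load_profile, orm
from aiida.tools import delete_nodes

load_profile()

PK = 1234  # placeholder: PK of the node you intend to delete

# Dry run: nothing is deleted, we only collect the PKs that the traversal
# rules would include (this typically covers the linked RemoteData as well).
pks_to_delete, _ = delete_nodes([PK], dry_run=True)

for pk in sorted(pks_to_delete):
    print(pk, orm.load_node(pk).node_type)
```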
I understand your point. That's why I suggest
That's not true, calcjobs also have a working directory. This thread is getting too long and hard to follow, and it is also deviating from the main points that we wanted to address.
I think this would be a solution.
Sure.
Another update: Pipeline commands can be a solution, the
Yes, it is out of the scope of this discussion.
I want to clean the remote folder when I delete a node. At the moment, I need to find all the calcjobs that are called by this node, clean their workdirs, and then delete the node.
It would be useful to add an option to clean the remote folders when deleting the node.
e.g.,
verdi node delete --clear-workdir <pk>
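For reference, a minimal sketch of the manual workaround described above, using the Python API rather than the proposed flag. `PK` is a hypothetical placeholder, and the node is assumed to be a workflow-like process whose descendant calcjobs own the remote folders:

```python
from aiida import load_profile, orm
from aiida.tools import delete_nodes

load_profile()

PK = 1234  # placeholder: PK of the node you want to delete
node = orm.load_node(PK)

# 1. Find all CalcJob nodes called (directly or indirectly) by this node.
calcjobs = [n for n in node.called_descendants if isinstance(n, orm.CalcJobNode)]

# 2. Clean their remote work directories (the same cleanup that
#    `verdi calcjob cleanworkdir` performs).
for calcjob in calcjobs:
    if 'remote_folder' in calcjob.outputs:
        calcjob.outputs.remote_folder._clean()

# 3. Delete the node itself; linked nodes follow the usual traversal rules.
delete_nodes([PK], dry_run=False)
```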