oarsub -t deploy -I does not move the user to its slice #128
With genepi-21 configured as an OAR frontend (and deploy frontend) + pam_systemd added to common-session, I get:
-> seems OK... but it does not behave the same on the real Grid'5000 frontend, and I don't know why. To debug, I dumped the command executed to connect to the deploy frontend:
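For reference, "pam_systemd added to common-session" on Debian usually means a line along these lines in /etc/pam.d/common-session (illustrative only; the exact options on the frontend may differ):

```
session optional pam_systemd.so
```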
It also works OK with oarsub -C [jobid].
My test environment is:
Same with a non-interactive job:
So what is the difference between your setup (where the result indeed looks fine) and the Grid'5000 frontends?
My guess:
The frontends were built by upgrading Wheezy systems, so yes, it's possible that something was left behind. But I did not see anything obvious...
Closing the issue here; follow-up on g5k's Bugzilla: https://intranet.grid5000.fr/bugzilla/show_bug.cgi?id=6560.
user-116 is actually the oar user (uid 116) -> so this is not good, in fact.
which is a system slice, not a user slice as I see in my tests -> weird.
Using:
But it does not work without "systemd-run --scope":
Also it does not work without su in the loop, e.g.:
or
NB: also note group 0 in the latter output. => Seems like oardodo should maybe:
A way to get a slice with only oardodo (but a 'run' slice, not a 'session' one):
This does not go through PAM, however.
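For illustration only, here is a minimal C sketch of that approach (not the actual oardodo code): it wraps the user command in a transient scope created with systemd-run --scope, placed under the user's slice, and switches to the user with su as in the commands above. The slice naming and argument handling are assumptions.

```c
/* Hypothetical sketch (not real oardodo code): run the user's command
 * inside a transient scope under the user's slice, via systemd-run.
 * This gives a run-<id>.scope unit rather than a session-<id>.scope,
 * because no PAM/logind session is opened. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <uid> <user> <command>\n", argv[0]);
        return 1;
    }

    /* Assumed slice naming: user-<uid>.slice */
    char slice[64];
    snprintf(slice, sizeof(slice), "--slice=user-%s.slice", argv[1]);

    /* systemd-run --scope --slice=user-<uid>.slice su - <user> -c '<command>' */
    execlp("systemd-run", "systemd-run", "--scope", slice,
           "su", "-", argv[2], "-c", argv[3], (char *)NULL);
    perror("execlp systemd-run");
    return 1;
}
```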
Next stop could be:
It would probably be better to just talk to PAM, and not call systemd-run manually. Regarding talking to PAM, since oardodo is in C, good starting points are:
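As a sketch of that direction (not existing OAR code), a minimal libpam caller could look like the following; opening the session is the step that runs pam_systemd and lets logind move the process into the user's session scope. The PAM service name "oar" is hypothetical, and error handling is reduced to a minimum. Link with -lpam.

```c
/* Minimal sketch (assumed, not actual OAR code): open a PAM session so
 * that pam_systemd/logind places this process in the user's slice. */
#include <security/pam_appl.h>
#include <stdio.h>

/* No-op conversation function; a real tool would handle prompts. */
static int conv(int n, const struct pam_message **msg,
                struct pam_response **resp, void *data)
{
    (void)n; (void)msg; (void)resp; (void)data;
    return PAM_CONV_ERR;
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <user>\n", argv[0]);
        return 1;
    }

    struct pam_conv pamc = { conv, NULL };
    pam_handle_t *pamh = NULL;

    /* "oar" is a hypothetical PAM service name (/etc/pam.d/oar). */
    int ret = pam_start("oar", argv[1], &pamc, &pamh);
    if (ret != PAM_SUCCESS) {
        fprintf(stderr, "pam_start: %s\n", pam_strerror(pamh, ret));
        return 1;
    }

    /* Establish credentials, then open the session: this is where
     * pam_systemd registers the session with logind. */
    ret = pam_setcred(pamh, PAM_ESTABLISH_CRED);
    if (ret == PAM_SUCCESS)
        ret = pam_open_session(pamh, 0);

    if (ret == PAM_SUCCESS) {
        /* ... drop privileges and exec the user's command here ... */
        pam_close_session(pamh, 0);
        pam_setcred(pamh, PAM_DELETE_CRED);
    } else {
        fprintf(stderr, "PAM: %s\n", pam_strerror(pamh, ret));
    }

    pam_end(pamh, ret);
    return ret == PAM_SUCCESS ? 0 : 1;
}
```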
After oarsub -t deploy -I, I would expect my shell to be in my user slice, according to systemd-cgls.
But this is not the case: my shell (or the script I run inside my job) is located in the cgroup named "oar-node.service".
This is probably because the login mechanism used by OAR does not go through PAM.
As a result, it is not possible to enforce per-slice limits, for example.
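One way to check this from inside a job, besides inspecting systemd-cgls from outside, is to read /proc/self/cgroup; a small illustrative sketch (not part of OAR):

```c
/* Small sketch: print this process's cgroup membership. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/cgroup", "r");
    if (!f) { perror("/proc/self/cgroup"); return 1; }

    /* Each line has the form hierarchy-ID:controllers:path, e.g. likely
     * something like ".../oar-node.service" on the affected setup, vs.
     * ".../user.slice/user-<uid>.slice/session-<id>.scope" when the
     * login goes through PAM/logind. */
    char line[512];
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```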