What did you do to encounter the bug?

When looking at project management from an operator perspective, the multi-tenancy decision to "lock" project references by ID to a namespace seems contradictory to how `projectRef` works.

For example, an AtlasDatabaseUser in namespace A can set any `projectRef.name` and `projectRef.namespace`, which allows it to create users in any project that exists in the cluster. A cluster admin is responsible for building policies on top of this to stop tenants from accessing certain projects, which makes sense, and there are plenty of tools we can leverage here.
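As an illustration, a cross-namespace `projectRef` looks roughly like this (resource, secret, and namespace names are hypothetical):

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: app-user          # hypothetical tenant resource
  namespace: team-a       # tenant namespace
spec:
  projectRef:
    name: shared-project  # AtlasProject living in a different namespace
    namespace: atlas-admin
  username: app-user
  passwordSecretRef:
    name: app-user-password
  roles:
    - roleName: readWrite
      databaseName: app-db
```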
However, when using `externalProjectRef.id`, I cannot set a namespace for the credentials: the operator defaults to looking up the `connectionSecret` in the resource's own namespace.
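A sketch of the limitation (names and the project ID are hypothetical): the `connectionSecret` reference only takes a name, so the Secret has to exist in the same namespace as the tenant resource:

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
  name: app-user
  namespace: team-a
spec:
  externalProjectRef:
    id: "0123456789abcdef01234567"  # hypothetical Atlas project ID
  connectionSecret:
    name: atlas-credentials          # resolved in team-a only; no namespace field
  username: app-user
  passwordSecretRef:
    name: app-user-password
  roles:
    - roleName: readWrite
      databaseName: app-db
```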
This has led us to import the project into Kubernetes, set its status to the project ID, and add an annotation to skip reconciliation, just for the sake of allowing tenants in multiple namespaces to reference the project.
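The workaround looks roughly like the following (project name and namespace are hypothetical; the skip annotation is the reconciliation-policy annotation the operator supports, to the best of my knowledge). The project ID ends up in the resource's status rather than the spec, so the manifest itself only carries the placeholder project:

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: shared-project
  namespace: atlas-admin
  annotations:
    mongodb.com/atlas-reconciliation-policy: skip  # operator leaves the project untouched
spec:
  name: shared-project
```

The resource exists purely so that tenants in other namespaces have something to point their `projectRef` at; the skip annotation prevents the operator from reconciling (and potentially wiping) the real project configuration.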
What did you expect?
It would be ideal if the same were true for referencing a project by ID. It should be up to cluster admins/operators to build guardrails that stop people from referencing projects they shouldn't have access to. The less secure route is to copy the project credential into every namespace, but this introduces operational overhead and several security risks, which makes it a non-starter for us.
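Concretely, allowing a namespace on the credential reference would solve this. Note this field does not exist today; it is only a sketch of the proposal, with hypothetical names:

```yaml
spec:
  externalProjectRef:
    id: "0123456789abcdef01234567"  # hypothetical Atlas project ID
  connectionSecret:
    name: atlas-credentials
    namespace: atlas-admin           # proposed field, not currently supported
```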
In the current state, bringing a project into Kubernetes without its full configuration (which requires Org Owner permissions when importing via the Atlas CLI) can lead to settings being wiped out, such as private endpoints, custom roles, and the IP access list.
This originally stems from the fact that multiple teams can be part of the same project(s).

Initially we wanted to use `externalProjectRef.id` and avoid managing the project in Kubernetes at all, since pieces of it were already managed in Terraform.

Reflector works for the use case where a tenant in a namespace does not have RBAC access to Secrets (for a shared project), or where tenants own their projects outright. If we did this, every tenant would get access to a project-owner credential. Having to create a separate credential for each tenant of the same project seems like unnecessary overhead when the ideal scenario would be a single project credential in a dedicated admin namespace.