I wanted to have a conversation about introducing a cluster-scoped CRD for Kabanero. This is a somewhat major design change, but I think it will solve some technical challenges we have coming up in the next couple of releases, and fit our overall install into the operator framework in a better way.
We have some requirements (#242, and #237 desires it as well) for Kabanero to manage a cluster-scoped object. I'll use the landing page as an example. The landing page describes how to use Kabanero as a whole. It also modifies the OpenShift console (by creating ConsoleLink CR instances), and the console is a singleton in the cluster. When there are multiple instances of the Kabanero CR, it is difficult to know which version of the landing page to use when applying the ConsoleLink CRs to the OpenShift console. Since the console refers to a route to the landing page for a specific Kabanero instance, it is cumbersome to update the console when that instance is removed: another Kabanero instance must be selected and used instead.
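To make the singleton concrete, here is a hedged sketch of the kind of object the landing page creates. The function name, link name, and route handling are hypothetical; the ConsoleLink fields (spec.href, spec.text, spec.location) follow the console.openshift.io/v1 API as I understand it:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createLandingPageLink (hypothetical) creates the ConsoleLink CR that puts
// the landing page into the OpenShift console's application menu. ConsoleLink
// is cluster-scoped, so every Kabanero instance shares this one set of links:
// the singleton problem described above.
func createLandingPageLink(ctx context.Context, c client.Client, routeHost string) error {
	link := &unstructured.Unstructured{}
	link.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "console.openshift.io",
		Version: "v1",
		Kind:    "ConsoleLink",
	})
	link.SetName("kabanero-landing-page")
	link.Object["spec"] = map[string]interface{}{
		"href":     "https://" + routeHost, // route of one specific Kabanero instance
		"text":     "Kabanero Landing Page",
		"location": "ApplicationMenu",
		"applicationMenu": map[string]interface{}{
			"section": "Kabanero",
		},
	}
	return c.Create(ctx, link)
}
```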
Our current strategy for installing cluster-scoped objects is to use an install script. This script exists outside the operator framework, and while this makes initial installation easy, it makes it difficult to modify these components once they are installed. We install other software (kAppNav, the Tekton dashboard/webhook) at the cluster scope in the install script. These should really be managed by the Kabanero operator as well. It will be difficult to manage the upgrade of kAppNav instances and the dashboard using the install script. It would be better if these were managed by the operator, which could read the version of Kabanero that is installed and apply the appropriate YAMLs for that version.
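That version-driven behavior could look something like this minimal sketch, where all names, versions, and manifest paths are placeholders for illustration:

```go
package main

import "fmt"

// manifestsFor (hypothetical) maps an installed Kabanero version to the
// cluster-scoped manifests the operator should apply, replacing the
// one-shot install script with something the operator can re-run on upgrade.
func manifestsFor(version string) ([]string, error) {
	known := map[string][]string{
		"0.6.0": {"deploy/0.6.0/kappnav.yaml", "deploy/0.6.0/tekton-dashboard.yaml"},
		"0.7.0": {"deploy/0.7.0/kappnav.yaml", "deploy/0.7.0/tekton-dashboard.yaml"},
	}
	paths, ok := known[version]
	if !ok {
		return nil, fmt.Errorf("no manifests known for Kabanero version %s", version)
	}
	return paths, nil
}
```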
A second problem that we have to solve is supporting pipelines in alternate namespaces (#230). This is in preparation for multiple Kabanero workspaces on a cluster. Our current solution involves subscribing to the Kabanero operator in multiple namespaces, thereby allowing the operator to watch for Collection objects in all of those namespaces. This method of installing seems cumbersome for the administrator, who must now update the subscription each time a new namespace is added as a pipeline namespace. This seems prone to error, since the Kabanero operator is not (and should not be) managing the subscription. Support for multiple-namespace subscriptions is not well documented, offers no OpenShift console help, and appears not to be currently recommended by OLM.
I wanted to propose an alternate solution which addresses these problems:
Introduce a cluster-scoped CRD. This CRD would manage the landing page (and its associated OpenShift console modifications), kAppNav, the Tekton dashboard, and potentially the Kabanero webhook. These are all things that are installed at the cluster scope. I am thinking that the CRD would not contain a version field, and that it would just install the latest version of these things that are available, but that is open for discussion.
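A sketch of what the type might look like with kubebuilder markers; the name Config and its fields are placeholders for discussion, and the essential detail is scope=Cluster, which makes instances cluster-wide rather than namespaced:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ConfigSpec selects which cluster-scoped components the operator manages.
// Note there is deliberately no version field: the operator would install
// the latest available version of each component.
type ConfigSpec struct {
	LandingPageEnabled     bool `json:"landingPageEnabled,omitempty"`
	KAppNavEnabled         bool `json:"kappnavEnabled,omitempty"`
	TektonDashboardEnabled bool `json:"tektonDashboardEnabled,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster

// Config is the proposed cluster-scoped CRD (name is a placeholder).
type Config struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ConfigSpec `json:"spec,omitempty"`
}
```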
The Kabanero operator is installed at the cluster scope, watching for the new CRD, and for Kabanero instances, across the whole cluster. The operator would no longer watch for Collection instances.
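A minimal sketch of the cluster-scoped side, assuming the manager.Options.Namespace field from the controller-runtime releases of this era (newer releases configure this through the cache options instead):

```go
package main

import (
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// newClusterScopedManager builds a controller-runtime manager whose cache
// and watches span every namespace, so a single operator instance can see
// Config and Kabanero objects anywhere in the cluster.
func newClusterScopedManager() (manager.Manager, error) {
	cfg, err := config.GetConfig()
	if err != nil {
		return nil, err
	}
	// An empty Namespace means "watch all namespaces".
	return manager.New(cfg, manager.Options{Namespace: ""})
}
```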
Separate the collection controller into its own component (see #58, "Consider separation of lifecycle concerns with the collection controller"). When a Kabanero instance is created, the Kabanero controller will deploy the collection controller to any configured pipeline namespaces (or the owning namespace if no pipeline namespaces are configured). The collection controller will be configured to watch for Collection objects only in the namespace where it is deployed. The Collection object would be modified with the information necessary to read the collection hub it came from, and to download and install its requisite pipelines, tasks, etc.
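The collection controller side is the mirror image. This sketch assumes the Kabanero controller sets WATCH_NAMESPACE on each deployed copy (WATCH_NAMESPACE is the Operator SDK convention; the wiring here is an assumption for illustration):

```go
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// newCollectionManager restricts this collection controller to the single
// namespace named in WATCH_NAMESPACE. Collection objects in other pipeline
// namespaces are reconciled by the copies deployed there.
func newCollectionManager() (manager.Manager, error) {
	ns, found := os.LookupEnv("WATCH_NAMESPACE")
	if !found || ns == "" {
		return nil, fmt.Errorf("WATCH_NAMESPACE must name this controller's pipeline namespace")
	}
	cfg, err := config.GetConfig()
	if err != nil {
		return nil, err
	}
	return manager.New(cfg, manager.Options{Namespace: ns})
}
```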
I would propose that these updates be done concurrently with #250 as well, bringing the operator current with the Operator SDK. These proposed changes are not trivial, so it makes sense to bring the runtime current before we start.
List of work to do:
- Config cluster-scoped CRD: Introduce a cluster scoped Config CRD (#703)
- Team CRD: Introduce the Team CRD and deprecate the Kabanero CRD (#704)
- Config controller: Move events-operator CRDs into the config controller (#708)
- More to come...