This repository has been archived by the owner on Jun 8, 2022. It is now read-only.

[Feature] Consider polymorphic for ContainerizedWorkload implementation #211

Open
resouer opened this issue Sep 18, 2020 · 10 comments

@resouer
Contributor

resouer commented Sep 18, 2020

Note: this is a requirement from two end users (https://github.com/goodrain/rainbond and https://xlauncher.com/), so it seems worth a design and a fix.

For now, in the OAM Kubernetes runtime the child resource of ContainerizedWorkload is a Deployment. The users hope this can become polymorphic, so that in certain cases a StatefulSet is generated instead.

The users' proposal is to let the persistent volume trait determine whether a StatefulSet should be generated. I'm not sure that's reliable enough, though; open for discussion.

The implementation is straightforward: when a persistent volume trait is attached to a component, the OAM runtime adds a label to the generated ContainerizedWorkload CR metadata to indicate whether its child resource should be a Deployment or a StatefulSet.
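
For illustration only, here is a minimal sketch of how the runtime could stamp such a label onto the rendered workload CR; the label key and the helper name are assumptions for the sketch, not a final design:

package sketch

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// setChildResourceLabel is a hypothetical helper: it records the desired child
// resource kind ("Deployment" or "StatefulSet") as a label on the rendered
// ContainerizedWorkload CR, depending on whether a persistent volume trait is attached.
func setChildResourceLabel(w *unstructured.Unstructured, hasVolumeTrait bool) {
	labels := w.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	kind := "Deployment"
	if hasVolumeTrait {
		kind = "StatefulSet"
	}
	labels["app.oam.dev/childResource"] = kind
	w.SetLabels(labels)
}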

@resouer
Contributor Author

resouer commented Sep 18, 2020

/cc @krmayankk

@xishengcai
Contributor

Please assign me.

@xishengcai
Contributor

xishengcai commented Sep 30, 2020

Polymorphic design for the containerized workload:

1. Feature goals

  1. The OAM ContainerizedWorkload child resource supports a dynamic workload type. Previously only Deployment was supported; now, when the workload is first initialized, the child workload becomes a StatefulSet if a volume trait has been attached, otherwise it defaults to a Deployment.
    The polymorphism only decides the child workload type the first time the ContainerizedWorkload is applied; adding or removing a VolumeTrait afterwards does not change the child workload type.

2. New trait CRD: VolumeTrait

  1. User-facing information
  • storage class name
  • storage size
  • mount path inside the container
  2. CRD
type VolumeTrait struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   VolumeTraitSpec   `json:"spec,omitempty"`
	Status VolumeTraitStatus `json:"status,omitempty"`
}

// A VolumeTraitSpec defines the desired state of a VolumeTrait.
type VolumeTraitSpec struct {
	VolumeList []VolumeMountItem `json:"volumeList"`
	// WorkloadReference to the workload this trait applies to.
	WorkloadReference runtimev1alpha1.TypedReference `json:"workloadRef"`
}

// A VolumeMountItem describes the volumes to mount into one container,
// identified by its index in the pod spec.
type VolumeMountItem struct {
	ContainerIndex int        `json:"containerIndex"`
	Paths          []PathItem `json:"paths"`
}

// A PathItem describes one volume to provision: its storage class, the
// requested size, and the mount path inside the container.
type PathItem struct {
	StorageClassName string `json:"storageClassName"`
	Size             string `json:"size"`
	Path             string `json:"path"`
}
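
For instance, a trait requesting a 10Gi volume from storage class "standard" mounted at /data in the first container could look like this (an illustrative Go literal using the types above; the values are made up):

vt := VolumeTrait{
	ObjectMeta: metav1.ObjectMeta{Name: "sample-volume-trait"},
	Spec: VolumeTraitSpec{
		VolumeList: []VolumeMountItem{{
			ContainerIndex: 0,
			Paths: []PathItem{{
				StorageClassName: "standard",
				Size:             "10Gi",
				Path:             "/data",
			}},
		}},
	},
}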

3. Implementation logic

  1. New behavior in the appconfig controller
  • Query the component's traits; if a VolumeTrait is present, add the label app.oam.dev/childResource=StatefulSet to the ContainerizedWorkload.
  2. New behavior in the workload controller
  • Create the child workload (Deployment or StatefulSet) according to the label app.oam.dev/childResource (a sketch follows this list).
  3. VolumeTrait behavior
  • Find the ContainerizedWorkload and its child resources.
  • Create PVCs from the VolumeTrait information and generate volumeMounts and volumes.
  • Update the volumeMounts and volumes.
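
A minimal sketch of step 2 above (choosing the child workload from the label), assuming the label key app.oam.dev/childResource proposed in this design; renderChildWorkload is a hypothetical helper, not code from this repo:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// renderChildWorkload picks the child resource kind from the label set by the
// appconfig controller and builds the corresponding object from the pod template.
func renderChildWorkload(labels map[string]string, selector *metav1.LabelSelector, tmpl corev1.PodTemplateSpec) runtime.Object {
	if labels["app.oam.dev/childResource"] == "StatefulSet" {
		return &appsv1.StatefulSet{
			Spec: appsv1.StatefulSetSpec{Selector: selector, Template: tmpl},
		}
	}
	// Default: stateless, keep the original Deployment behavior.
	return &appsv1.Deployment{
		Spec: appsv1.DeploymentSpec{Selector: selector, Template: tmpl},
	}
}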

@resouer
Contributor Author

resouer commented Sep 30, 2020

@ryanzhang-oss Could you help review the implementation plan? It generally looks good to me.

@xishengcai Please contribute the code to https://github.com/oam-dev/catalog/tree/master/workloads; we will keep this repo clean and provide only the building-block primitives of OAM.

@captainroy-hy
Contributor

VolumeTrait changes the type of a workload's child resources, which may result in unpredictable impact or breaking changes for existing traits and scopes that interact with those child resources. If traits and scopes are going to deal with such changes, they need a general way to know whether the component/workload is stateless or stateful. A Policy (#33) may be a better mechanism than a label to indicate whether a component/workload is stateful or stateless.

BTW, this feature seems similar to issue #13.

@ryanzhang-oss
Collaborator

Agree with @captainroy-hy. Using a trait to determine the behavior of a workload controller is the designed behavior. In addition, I don't know what to put in the childResources field of the workloadDefinition CR in this case.

@resouer
Contributor Author

resouer commented Oct 1, 2020

@ryanzhang-oss According to this implementation plan, I believe we could reserve some value like childResource: dynamic, or simply leave it empty. In either case, the OAM runtime will then check the label on the workload CR to determine what the child resource is.

The concern I see here is that this design couples the OAM runtime with the volume trait. The alternative, IMO, is to let the ContainerizedWorkload controller check whether a volume trait is bound to the Component, then generate the child resource and patch the child-resource label onto the workload CR (a sketch follows below). It should also handle workload CR update/delete properly. Basically, defer the logic to the workload controller instead of the appconfig controller. /cc @xishengcai WDYT?
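
A rough sketch of that alternative, assuming a controller-runtime client and a generated VolumeTraitList type for the CRD proposed above (both are assumptions, not existing code in this repo):

package sketch

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// volumeTraitBound reports whether any VolumeTrait in the namespace references
// the given workload, so the ContainerizedWorkload controller itself can decide
// between Deployment and StatefulSet. VolumeTraitList is assumed to be the list
// type generated for the CRD sketched earlier in this issue.
func volumeTraitBound(ctx context.Context, c client.Client, namespace, workloadName string) (bool, error) {
	var traits VolumeTraitList
	if err := c.List(ctx, &traits, client.InNamespace(namespace)); err != nil {
		return false, err
	}
	for _, t := range traits.Items {
		if t.Spec.WorkloadReference.Name == workloadName {
			return true, nil
		}
	}
	return false, nil
}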

@captainroy-hy An OAM policy is actually equivalent to special labels on the Component, so you are basically suggesting using a label instead of the volume trait to determine the childResource.

In this case, we can decouple this feature further by introducing a component-level label such as component.oam.dev/persistent-storage=true|false. Binding the volume trait would automatically add this label to the Component, and the ContainerizedWorkload controller could choose whether to respond to it (see the sketch below).
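
For illustration, a tiny sketch of that decoupled check; the label key follows the suggestion above and the helper name is hypothetical:

package sketch

// wantsPersistentStorage reports whether a Component has opted into persistent
// storage via the proposed component-level label (set automatically when the
// volume trait is bound); the ContainerizedWorkload controller may honor it or not.
func wantsPersistentStorage(componentLabels map[string]string) bool {
	return componentLabels["component.oam.dev/persistent-storage"] == "true"
}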

@xishengcai
Contributor

@LEI Zhang (Harry) The component label decides the ContainerizedWorkload child resource; VolumeTrait creates or deletes PVCs (it cannot update a PVC that has already been created). The code has been implemented: https://github.com/xishengcai/oam-kubernetes-runtime/tree/caixisheng

@wonderflow
Member

"let the ContainerizedWorkload controller check whether a volume trait is bound to the Component, then generate the child resource and patch the child-resource label onto the workload CR"

I agree; we could do this work in PodSpecWorkload in KubeVela.

@resouer
Contributor Author

resouer commented Oct 17, 2020

@xishengcai It's great to see the PoC code shipped! Shall we split the workload and trait code out of the runtime and move it to https://github.com/oam-dev/catalog? The plan is that we will not maintain any workload/trait/scope code in the runtime in the future.
