Kickoff of cloud native discussion #7323
-
I've added a mindmap to facilitate the early discussion at https://planetf1.github.io/notes/cloudnative/
-
Hi Nigel, the SVG link of the mindmap does not seem to be correct. I get a 404.
-
Call recording - password kFcyMcW7
-
I've added a workgroup page on the wiki - see https://wiki.lfaidata.foundation/display/EG/Cloud+Native. Just a quick version so far.
-
Mandy has added a summary of the face-to-face discussion in the Jan 2023 project newsletter.
-
At our recent Egeria team face-to-face, we agreed to start up a workgroup on ‘cloud native’.
Egeria’s current service model is based on our ‘server chassis’, or platform, within which we host ‘servers’. The platform is a single Java process.
These servers can be of varying types, including metadata repositories, integration daemon servers, metadata access points, view services (UI support), and others.
These servers then host one or more services – be it the OMRS (for a repository), an OMAS (e.g. the Asset Consumer access service), or an OMIS (an integration service).
The configuration specification is geared around JSON configuration documents. These documents can be stored/provided up front, or built incrementally through configuration APIs. Additionally, management of the services they define is through APIs (i.e. the Admin Services and Platform Services).
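To make this concrete, a much-simplified, illustrative configuration document for a metadata server might look something like the sketch below. The field names are indicative of the general shape only and are not the authoritative schema; the real documents are normally built up by calling the Admin Services APIs.

```json
{
  "class": "OMAGServerConfig",
  "localServerName": "cocoMDS1",
  "localServerType": "Metadata Access Store",
  "localServerURL": "https://localhost:9443",
  "repositoryServicesConfig": {
    "comment": "repository connector, cohort and audit log configuration goes here"
  },
  "accessServicesConfig": [
    { "accessServiceName": "Asset Consumer" }
  ]
}
```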
Our current container support, which I’ve developed over the last few years (starting from the perils of running a lab at IBM Think!), is mostly oriented around a single container image for the core Java process, i.e. the platform. We have Helm charts that set up one or more platforms plus ancillary components – UIs, Kafka etc. These have been designed to support education, demos, tutorials and getting started, but not production. They have, however, been used by many as a base, including for production with more work around CI/CD.
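For orientation only, installing one of these community charts usually looks something like the commands below. The repository URL and chart name are taken from the odpi/egeria-charts project as I understand it and should be treated as assumptions to verify, not a definitive install recipe.

```bash
# Assumed chart repository and chart name (see odpi/egeria-charts); verify before use.
helm repo add egeria-charts https://odpi.github.io/egeria-charts
helm repo update

# Installs a demo/tutorial environment: Egeria platform(s), UI, Kafka, etc.
helm install lab egeria-charts/odpi-egeria-lab
```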
We also have a Kubernetes operator with a single CRD that will set up and manage platforms, including scaling, but it makes use of existing configuration documents. This provides active control, but only at the platform level. It hasn’t been released yet. We were about to look at further control within the limits of the existing configuration management support.
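To illustrate the idea, a custom resource for such an operator might look something like the hypothetical sketch below. The API group, kind and field names are my illustrative assumptions, not the actual (unreleased) CRD.

```yaml
# Hypothetical custom resource - the group, kind and field names are illustrative only.
apiVersion: egeria.odpi.org/v1alpha1
kind: EgeriaPlatform
metadata:
  name: production-platform
spec:
  replicas: 2                       # operator scales the platform pods
  image: quay.io/odpi/egeria:4.0    # illustrative image reference
  serverConfig:
    # existing JSON configuration documents, e.g. supplied via a ConfigMap
    configMapName: egeria-server-configs
```

Even in this hypothetical form it shows the limitation noted above: the operator can declare and scale platforms, but the servers and services inside them are still driven by the existing configuration documents.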
Whilst this was discussed when the operator work started, recently, and especially at the face-to-face community meetings in Amsterdam, the subject of better ‘cloud native’ support has come up again. This is important for some community members as a way of running in an enterprise. It may be less important for small deployments. The existing operator work has shown that the configuration/operation management is compromised by our current platform architecture, and also that cloud native is a broader discussion than just an operator.
We are forming a workgroup. One of the early tasks will be to define what we mean by ‘cloud native’, what characteristics we need, and the levels of granularity. At this point it likely means:
I have arranged the first meeting for 15:00 UTC on Wed 25 Jan 2023. If you would like an invite, please respond here. We will continue to post updates here and through issues in GitHub, and gather feedback on developer, community and TSC calls.
I suggest opening up specific topic threads on areas for discussion. Looking forward to developing new ideas and solutions together!