Supposed conflict between k8s 1.14 and 1.17 imports #53
Hi @arnaudbos, thanks for opening this issue and for opening the PR! The 1.17 link and the example definitely needed to be fixed, thanks for taking care of that! :) With regard to type compatibility, I mentioned in #54 that we can go different routes:
Options 1 and 2 fix the issue but are less ergonomic for the user, while 3 is better for the user but less ergonomic for the maintainers of hand-crafted packages. I think we can take the maintainer tax (so adopt solution 3) if we build some automation around replacing and freezing the different versions (e.g. a maintainer only writes 1.14, and then runs a script to replicate it in the other versions). I'm not sure if there is a fourth solution that will address all of this, so 3 seems the most likely to me right now. What do you think? Would you have time to help with this?
Yes, I'd like to help with this. If you agree, as per your suggestion, I will simply delete the other PR and open a new one. I think we can keep this issue open to discuss further. I'd be happy to have guidance on how to automate things. I don't quite know how to start yet, but I'd be worried about the combinatorial explosion of maintaining every version of argo/argocd/ambassador/etc. on top of every Kubernetes variation. In the meantime, I will use my local "patched" version to get stuff done at work 🙂
Thank you for your help :)
I agree with this. I think fixing the link in the 1.17 file wouldn't hurt either, since right now it's pointing to the wrong location, but I can take care of that separately 👍
I think we should have the following directory structure:

```
kubernetes
  / base
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.14
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.14
  / 1.15
      / argocd
      / argo
      / cert-manager
      ... etc
      k8s.dhall -- contains link to k8s 1.15
  ... etc
```

where "base" contains the files manually authored, and the other directories contain a copy of them. The only file that should differ between the version directories is `k8s.dhall`.
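As a rough sketch (not the repository's actual import), each per-version `k8s.dhall` could simply re-export the dhall-kubernetes package matching that directory's version; the URL below is a placeholder:

```dhall
-- Sketch of kubernetes/1.15/k8s.dhall — the URL/tag is illustrative only.
let k8s =
      https://raw.githubusercontent.com/dhall-lang/dhall-kubernetes/master/package.dhall
      -- ^ replace `master` with the tag or commit matching Kubernetes 1.15,
      --   then run `dhall freeze` to pin the import with a sha256 hash

in  k8s
```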
Much like what usually happens with Helm charts, I don't think the goal of this project should be supporting multiple versions of the underlying apps. It's up to each app's maintainer (contributors are more than welcome) to keep versions up to date. If a user needs to point to an older version, they can just refer to an older tag / commit hash, if it exists. The only case I think is worth supporting is different Kubernetes versions. We should aim to support all versions until they are EOL or cloud providers don't offer an upgrade (AWS, for example, is very behind schedule, but many users are on AWS and they shouldn't be left behind). This is quite a big design decision for this repository, so I'll post this issue on the Dhall Discourse and hopefully more people from the community can chime in. Maybe there is a better solution than what I proposed here.
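For illustration, "referring to an older tag / commit hash" just means pinning the import URL; the path and tag below are hypothetical, not actual coordinates in this repository:

```dhall
-- Pin a package to an older tag of the repo (path and tag are placeholders).
let argocd =
      https://raw.githubusercontent.com/EarnestResearch/dhall-packages/some-old-tag/kubernetes/argocd/package.dhall

in  argocd
```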
Sounds good, thanks for your patience!
I agree with @arnaudbos that we should probably avoid a combinatorial explosion of multiple supported versions for each dependency. I would go with the "low-tech" solution of picking only one Kubernetes version to support. From my perspective it is okay if this project is opinionated and doesn't address every potential user's needs. Even the users who cannot use this project will still find it useful as a starting point that they can learn from and/or fork for their own internal needs. I did a quick search for the Kubernetes versions supported by various cloud providers. Given that, I propose that we only need to support Kubernetes version 1.14, which should cover most cloud providers. I can also change the default accordingly.
@Gabriel439: I didn't mean for the versions to be combinatorial. @arnaudbos: Have you tried applying the 1.14 manifests to your cluster? They should work on a 1.17 cluster. Or do you need some fields from 1.17?
@amarrella: The scope of this repository isn't just Kubernetes, though; my understanding from the name of the repository is that it is meant to cover Dhall packages more generally.
@Gabriel439 you are right, it's not a Kubernetes-only repository; all Kubernetes dependencies are in the `kubernetes` directory. OK, if there is no user need for the replication I'll hold off on it (at least for now). I'd still like to provide users a better way of handling this use case, but perhaps this repo is not the place where that should happen.
* Fix k8s/1.17.dhall reference to upstream 1.14
* Fix README example to point to 1.14 (see EarnestResearch#53)
* Fix argocd/Project/Type.dhall pointing to different commit
Another route would be to split the repo in two, between k8s-related packages and non-k8s-related packages. That would allow a specific directory structure in one without impacting the other. Let me explain.
@amarrella I think the structure you've described above works. Maybe it could be "extended" to allow contributors to add any package version (e.g. argo 1.2.0), not just the latest, to whichever k8s version they like. My point is that referring to an older tag / commit hash does indeed allow a user to go back in time to a previous version, but only if that version has existed in the repository at some point. Adding subdirectories for package versions would allow a contributor to provide, for instance, package 1.4.0 even though 1.4.2 already exists.
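Concretely, such an extended layout might look something like this (package names and version numbers are only illustrative):

```
kubernetes
  / 1.14
      / argo
          / 1.2.0
          / 1.4.2
      / argocd
          / 2.5.2
      k8s.dhall -- contains link to k8s 1.14
  / 1.15
      ... etc
```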
However, I'd be worried that such a specific directory structure could mislead users/contributors into thinking that all versions of each package are expected to be supported/contributed. (It certainly happened to me when I came across this repo!) Splitting the repo in two would separate the curious case of the kubernetes package from "traditional" packages for which such a nested structure is unnecessary. I would also advise adding a note to the README to explain the repository structure, and to explicitly state why there may be holes between versions and that contributors are welcome to provide new versions of any package, be it forward or backward relative to the set of versions already contributed. My two cents.
@arnaudbos: I still don't think it's desirable to support multiple versions, even if it's technically feasible. The problem with doing so is that it leads to poor use of developer resources by spreading the community/ecosystem thin, which in turn leads to a lower level of quality.
Yes, it's true, I understand. On the topic of only supporting one version, what's Kubernetes' story in terms of forward compatibility? You've said 1.14 would be the soundest choice; is it a safe bet? I'm willing to believe it, just asking for confirmation.
@arnaudbos I believe that in most cases the types will be forward compatible, but with some caveats.
So I expect to update the version to 1.15 once EKS supports it. This is also motivated by the fact that many new versions of open-source Kubernetes packages already no longer support 1.14.
Thanks for the link. So I guess it's settled? Status quo until EKS bumps to 1.15, and then bump here too? I'd still be in favor of a note about this repo's policy in the README, though. A note to explicitly surface the supported versions of the packages themselves wouldn't hurt either (the current versions of Argo and Argo CD are 1.4.2 and 2.5.2, respectively, and at first glance it's hard to tell which versions the Dhall packages map to).
@arnaudbos I agree, versions should be documented for easier discoverability. I think I will create a Dhall file in each major package directory documenting the version and the maintainer, and add information about that in the README. For Kubernetes, I'll also add a note stating that the currently supported version is 1.14.
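As a sketch of what such a file could contain (the filename and field names are assumptions for illustration, not a settled format):

```dhall
-- Hypothetical kubernetes/argocd/meta.dhall
{ packageVersion = "2.5.2"      -- upstream Argo CD version these bindings target
, kubernetesVersion = "1.14"    -- Kubernetes version the types are written against
, maintainer = "https://github.com/some-maintainer"
}
```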
Sorry, I have been quite busy and haven't done this yet, but it's on my list of things to do! Meanwhile, it looks like Amazon released 1.15: https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-now-supports-kubernetes-version-1-15/ I'll wait a few days for tooling to catch up, and then 1.15 will become the officially supported version.
Hello folks 👋
First of all, thank you for your work. If there's a preferred way to ask questions other than opening an issue, please redirect me (as I didn't find anything) and slam this issue closed.
I'm experimenting with Dhall and Argo Workflows (and intend to compare with Pachyderm, FWIW) and I'm new to both.
The rationale for using Dhall, for me, is basically that I'm sick of YAML and Helm.
I've tried the example from the README; even though it's Argo CD and not Workflows, that's okay because I'm just getting started.
But for some reason the `ObjectMeta` type keeps failing me: I get a type error.
I've messed around with this example a bit: removed the `.schema` part and other things. Eventually I found something more interesting with an example which, I think, is more illustrative: `deployment` is from dhall-kubernetes' README and works OK, but `application` fails even though they're both using the same `k8s.ObjectMeta::` defaults completion. I get a type error.
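For reference, the record completion in question looks roughly like this (a minimal sketch; the import path and field value are illustrative, and field types vary between dhall-kubernetes versions):

```dhall
-- Record completion: k8s.ObjectMeta::{ … } is sugar for
-- (k8s.ObjectMeta.default // { … }) : k8s.ObjectMeta.Type,
-- so the defaults and the Type must come from the same schema version.
let k8s = ./kubernetes/k8s/1.17.dhall  -- illustrative path

in  k8s.ObjectMeta::{ name = Some "my-app" }
```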
I'm not sure if my intuition is correct, but here I go:
I think there is an issue with the definition of types and defaults inside `dhall-packages/kubernetes/k8s/`. These are the two reasons leading me to this assumption:
1. `initializers`: the first line of the error message, `{ metadata : { - initializers : …`, suggests that `initializers` is missing. However, while it is indeed specified in dhall-kubernetes 1.14 (line 10), it is no longer defined in dhall-kubernetes 1.17.
2. `dhall-packages/kubernetes/k8s/1.17.dhall` points to dhall-kubernetes 1.14: as you can see in this file, the link points to 1.14 instead of 1.17.
I have a feeling that this is linked to PR #41, but I am not sure.
I'm very unfamiliar with Dhall, as I said, so the whole `1.14` thing at the beginning of files, such as this one, confuses me. Anyway, I hope this is useful. If I'm wrong or mistaken in any way, please let me know so that I can diagnose further what's going on with my experiment.