
docs/user/vsphere: Add static IP via Afterburn info #4121

Conversation

@cgwalters (Member)

I didn't test this myself, but we should have landed the work
for 4.6.

@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign jcpowermac
You can assign the PR to them by writing /assign @jcpowermac in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@cgwalters (Member Author)

/assign jcpowermac

@cgwalters (Member Author)

Also, installer team: we should think about a way to use this in IPI scenarios. Anything of this form will require something like extending install-config with "per node configuration", which also gets into openshift/machine-config-operator#1720.

### Configuring RHCOS VMs with static IP addresses

As of OpenShift 4.6, you may use the new `guestinfo.afterburn.initrd.network-kargs` guest property to configure static IP addressing.
The CoreOS "afterburn" component will look for the well-known key `guestinfo.afterburn.initrd.network-kargs` and use its value in place of the default network kernel arguments.
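For example, an illustrative (untested) way to set this property with govc before the VM's first boot; the VM name, addresses, and DNS server below are placeholders:

```sh
# Untested sketch: inject static-IP kernel arguments via the Afterburn
# guestinfo key before powering the VM on for its first boot.
# All names and addresses here are placeholders.
export VM_NAME="worker-0"
govc vm.change -vm "${VM_NAME}" \
  -e "guestinfo.afterburn.initrd.network-kargs=ip=192.168.1.10::192.168.1.1:255.255.255.0:${VM_NAME}::none:192.168.1.53"
```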
Contributor

So this only sets it for the first boot; what about after that?

Should we show a complete example here?

Member Author

Hmm, you're right, we may need to document here how to also embed the static addressing in the Ignition config so it persists for subsequent boots. I will get back to you on this.
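In the meantime, a rough untested sketch of what embedding the addressing in the Ignition config might look like (this uses GNU base64; the interface name, addresses, and file path are placeholders, not anything from this PR):

```sh
# Untested sketch: embed a NetworkManager keyfile in the Ignition config so
# static addressing persists on every boot, not just the first one.
# Interface name and addresses are placeholders.
keyfile_b64=$(base64 -w0 <<'EOF'
[connection]
id=ens192
type=ethernet
interface-name=ens192

[ipv4]
method=manual
address1=192.168.1.10/24,192.168.1.1
dns=192.168.1.53;
EOF
)
# "mode": 384 is decimal for 0600, which NetworkManager requires for keyfiles.
cat > static-ip.ign <<EOF
{
  "ignition": { "version": "3.1.0" },
  "storage": {
    "files": [{
      "path": "/etc/NetworkManager/system-connections/ens192.nmconnection",
      "mode": 384,
      "contents": { "source": "data:text/plain;charset=utf-8;base64,${keyfile_b64}" }
    }]
  }
}
EOF
```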

Contributor

Hmm, that is a good point. What if I have to re-address my RHCOS virtual machine?

Member Author

Create a new machine, i.e. reprovision.

@lucab (Contributor), Sep 1, 2020

This plugs into the same flow as the bare-metal one: kargs should get translated into NM configuration in the initramfs, and that is forwarded to the real rootfs.
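A quick, hypothetical way to sanity-check that on a booted node (the address is a placeholder; note that these kargs are injected via cmdline.d in the initramfs, so they will not show up in /proc/cmdline):

```sh
# Hypothetical spot-check (placeholder address): the keyfile generated in the
# initramfs should have been propagated into the real root filesystem.
ssh core@192.168.1.10 \
  'ls /etc/NetworkManager/system-connections/ && nmcli device show ens192'
```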

Member Author

> This plugs into the same flow as the bare-metal one: kargs should get translated into NM configuration in the initramfs, and that is forwarded to the real rootfs.

OK cool, that's what I hoped, but I wasn't entirely sure whether kargs injected via cmdline.d worked the same in that respect as actual kargs.

(Did you test that?)

@jcpowermac (Contributor)

The PR seems fine to me as far as documentation goes. My only concern is the confusion between our current Terraform IP addressing and the addition of the extra config.

Was static addressing a feature for 4.6? I only see this being possible for 4.6 in UPI with a documentation change, but I doubt QE has tested it. I also think we should update the UPI Terraform template to replace our current method of addressing (which is different from #3533). And is this something we should already be testing in CI for 4.6?

> Also, installer team: we should think about a way to use this in IPI scenarios. Anything of this form will require something like extending install-config with "per node configuration", which also gets into openshift/machine-config-operator#1720.

Given that this is a guest's extra config, I would think the MAO would be a better location, at least for compute:
https://github.com/openshift/machine-api-operator/blob/master/pkg/controller/vsphere/reconciler.go#L41-L45
The installer could set it for the control plane.

@abhinavdahiya (Contributor)

> The PR seems fine to me as far as documentation goes. My only concern is the confusion between our current Terraform IP addressing and the addition of the extra config.
>
> Was static addressing a feature for 4.6? I only see this being possible for 4.6 in UPI with a documentation change, but I doubt QE has tested it. I also think we should update the UPI Terraform template to replace our current method of addressing (which is different from #3533). And is this something we should already be testing in CI for 4.6?
>
> Also, installer team: we should think about a way to use this in IPI scenarios. Anything of this form will require something like extending install-config with "per node configuration", which also gets into openshift/machine-config-operator#1720.

"Per node configuration" is something I'm not on board with at all for IPI; IMO it goes against the self-managing, autoscaling default experience we want for IPI clusters. If we can look at how to allow users to provide the installer with a range of IP addresses that the cluster can use to assign a static IP to each Machine it creates, that is something I'm more inclined towards as a next step.

> Given that this is a guest's extra config, I would think the MAO would be a better location, at least for compute:
> https://github.com/openshift/machine-api-operator/blob/master/pkg/controller/vsphere/reconciler.go#L41-L45
> The installer could set it for the control plane.

@jcpowermac (Contributor)

> "Per node configuration" is something I'm not on board with at all for IPI; IMO it goes against the self-managing, autoscaling default experience we want for IPI clusters. If we can look at how to allow users to provide the installer with a range of IP addresses that the cluster can use to assign a static IP to each Machine it creates, that is something I'm more inclined towards as a next step.

Completely agree. I am hoping this operator that metal uses might help with that, but it is certainly not something I have had a chance to investigate:
https://github.com/metal3-io/ip-address-manager

@cgwalters (Member Author)

> "Per node configuration" is something I'm not on board with at all for IPI; IMO it goes against the self-managing, autoscaling default experience we want for IPI clusters. If we can look at how to allow users to provide the installer with a range of IP addresses that the cluster can use to assign a static IP to each Machine it creates, that is something I'm more inclined towards as a next step.

Yeah... I think we may need to create a term like "external-installer-automated-provisioned-infrastructure" (extIPI? dunno): basically, people who are automating UPI installs (much like we are in CI for UPI). In some scenarios it's OK not to have autoscaling/machineAPI. You're probably right that extIPI wouldn't be provided by openshift-install itself, but it seems to me we should at least leave the door open to shipping something higher-level that aids in automating UPI, somewhat like the assisted installer for metal.

Anyways, I think we're all agreed: let's just target this for UPI.

@abhinavdahiya (Contributor)

/test e2e-azure

@cgwalters force-pushed the vsphere-upi-document-afterburn-static-ip branch from a2a6b8c to 2a388aa on September 1, 2020 at 15:11, with the commit message:

> I didn't test this myself, but we should have landed the work for 4.6. This is much more ergonomic for users than appending to the Ignition configs, and also solves the core problem that we need networking set up in the initramfs to fetch the real config from the Machine Config Server.
@cgwalters (Member Author)

OK, I noticed there were two existing sections on static IP and hostname; I consolidated them and replaced them with this.

However, clearly this would be much more compelling if we updated the existing Terraform code for UPI vSphere with static IPs to use this... I don't speak TF well enough to do that myself.

@cgwalters (Member Author)

> Was static addressing a feature for 4.6?

Yes, it was part of https://github.com/openshift/enhancements/blob/master/enhancements/rhcos/static-networking-enhancements.md

> I only see this being possible for 4.6 in UPI with a documentation change, but I doubt QE has tested it.

Agree.

> I also think we should update the UPI Terraform template to replace our current method of addressing (which is different from #3533). And is this something we should already be testing in CI for 4.6?

Yes to both! I know we're somewhat late in the cycle now, but this is a big ergonomic and implementation benefit for vSphere UPI. Would you have a few cycles to look at this?

@openshift-ci-robot (Contributor), Sep 1, 2020

@cgwalters: The following tests failed, say /retest to rerun all failed tests:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/prow/e2e-azure | 086fc1916f81ce2bcaf0701f59dad519c8b68657 | link | /test e2e-azure |
| ci/prow/e2e-metal-ipi | 2a388aa | link | /test e2e-metal-ipi |
| ci/prow/e2e-crc | 2a388aa | link | /test e2e-crc |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

On this line of the proposed docs:

```sh
govc vm.power -on "${VM_NAME}"
```
Contributor

govc is a community tool (offered by vSphere).
Who supports the use, distribution, and maintenance of this tool?

Member Author

I think UPI installations may necessarily involve some tools not shipped by Red Hat. The goal here is more to outline suggested approaches than to require any specific tooling. That said, we probably should have guidance on this. I don't have enough vSphere expertise to say, but I'm sure there are alternatives to govc that we could discuss in a separate section.

@mtnbikenc (Member)

/uncc

@openshift-ci-robot removed the request for review from mtnbikenc on September 23, 2020 at 19:29
@LorbusChris (Member)

I'm trying what's described here in #3533, but it's still not working.

@tvanderka

FYI this worked for me on OKD:

```
guestinfo.afterburn.initrd.network-kargs=ip=10.20.35.21::10.20.35.1:24:okd-worker-1::none:10.20.0.11
```

This creates /etc/NetworkManager/system-connections/default_connection.nmconnection with:

```
[ipv4]
address1=10.20.35.31/24,10.20.35.1
dhcp-hostname=okd-worker-1
dns=10.20.0.11;
dns-search=
```

It works fine, even though no dns-search/domain is set, so short-name resolution does not work. And interface=ens192 may be a good idea if using multiple interfaces.
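For example (hypothetical variant, same placeholder addresses as above): the interface is the sixth field of the dracut `ip=` syntax, so pinning the profile to ens192 would look like:

```
guestinfo.afterburn.initrd.network-kargs=ip=10.20.35.21::10.20.35.1:24:okd-worker-1:ens192:none:10.20.0.11
```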

@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Dec 23, 2020
@openshift-bot (Contributor)

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jan 22, 2021
@openshift-bot (Contributor)

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot (Contributor)

@openshift-bot: Closed this PR.

In response to this:

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
