feat: get node name from guest agent #38

Merged
2 commits merged into harvester:master from HARV-4947 on Feb 29, 2024

Conversation

@FrankYang0529 (Member) commented on Jan 24, 2024:

Related issue: harvester/harvester#4947

Test steps

  1. Create a Harvester cluster with harvester#5019 (feat(rbac): add guestosinfo to harvesterhci.io:cloudprovider).
  2. Create a VMImage and a VM Network.
  3. Import the Harvester cluster into a Rancher cluster.
  4. Use the Harvester kubeconfig to run deploy/generate_addon.sh. The cluster_name is the guest cluster name we will create later; the namespace is the guest cluster's VM namespace.
     ./deploy/generate_addon.sh <cluster_name> <namespace>
  5. Set up an RKE2 cluster with the External cloud provider. (screenshot)
  6. Copy the write_files section from step 4 into User Data, and also set a hostname. (screenshot)
  7. Create the RKE2 cluster and wait for the message "waiting for cluster agent to connect". (screenshot)
  8. SSH into the VM and install Helm.
  9. In the VM, set the kubeconfig:
     sudo chmod 666 /etc/rancher/rke2/rke2.yaml
     export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
  10. In the VM, clone the Harvester charts and install harvester-cloud-provider:
      git clone https://github.com/harvester/charts
      cd charts/charts/harvester-cloud-provider/
      helm install harvester-cloud-provider . --namespace kube-system \
        --set image.repository=frankyang/harvester-cloud-provider \
        --set image.tag=447fef20-amd64 \
        --set global.cattle.clusterName=<cluster_name>
  11. Check that the RKE2 cluster becomes ready (see the verification sketch after this list).
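
As a quick verification for the last step (a sketch, not part of the original test plan), with the kubeconfig from step 9 still exported; grepping for the release name is an assumption about how the chart names its pods:

kubectl get nodes -o wide                                        # node names should match the hostname set in User Data
kubectl -n kube-system get pods | grep harvester-cloud-provider  # the cloud-provider pod should be Running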

Signed-off-by: PoAn Yang <poan.yang@suse.com>
if _, err := i.vmClient.Get(i.namespace, node.Name, metav1.GetOptions{}); err != nil && !errors.IsNotFound(err) {
	return false, err
} else if errors.IsNotFound(err) {
	if _, err := i.getVM(node); err != nil {
A reviewer (Member) commented on the diff:

The original method used vmClient.Get to fetch data from the remote API server instead of from a cache. Now getVM reads from the sync.Map, which introduces a race: when the sync.Map has not been fully populated by OnVmiChanged, InstanceExists will not work as expected.
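
To make the race concrete, here is a minimal sketch (simplified, with stand-in types; not the PR's actual code) of an InstanceExists-style check that falls back to a sync.Map populated by OnVmiChanged:

package sketch

import (
	"sync"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// vmGetter stands in for the generated VirtualMachine client's Get used in the diff above.
type vmGetter interface {
	Get(namespace, name string, opts metav1.GetOptions) (*kubevirtv1.VirtualMachine, error)
}

type instanceManager struct {
	namespace    string
	vmClient     vmGetter
	nodeToVMName sync.Map // hostname -> VM name, filled by the OnVmiChanged handler
}

func (i *instanceManager) exists(nodeName string) (bool, error) {
	// First try the VM whose name equals the node name.
	if _, err := i.vmClient.Get(i.namespace, nodeName, metav1.GetOptions{}); err == nil {
		return true, nil
	} else if !errors.IsNotFound(err) {
		return false, err
	}
	// Fall back to the in-process map. This is the race: if OnVmiChanged has not
	// stored the mapping yet, the lookup misses a VM that already exists on the
	// API server and the node is reported as gone.
	vmName, ok := i.nodeToVMName.Load(nodeName)
	if !ok {
		return false, nil
	}
	if _, err := i.vmClient.Get(i.namespace, vmName.(string), metav1.GetOptions{}); err != nil {
		if errors.IsNotFound(err) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

The key point is the second branch: its answer depends on whether OnVmiChanged has already run, not on what actually exists on the API server.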

@FrankYang0529 (Member, Author) replied:

Good catch. Reverted vmCache/vmiCache to vmClient/vmiClient. Thanks.

@w13915984028 (Member) left a review:

Another question about the map, please double check, thanks.

"hostname": guestAgentInfo.Hostname,
}).Info("get agent info success, using hostname as node name")
nodeName = guestAgentInfo.Hostname
h.nodeToVMName.Store(nodeName, vmi.Name)
@w13915984028 (Member) commented on the diff:

vmi.Namespace is not stored in the map. If VMs in different namespaces have the same name, will they overwrite each other?

@FrankYang0529 (Member, Author) replied:

I think the better way is to drop events from other namespaces, because the cloud provider runs in the guest cluster and only needs to care about VMIs in that namespace. We can get the namespace the same way the instance manager does. WDYT?

cp.instances = &instanceManager{
	vmClient:  cp.kubevirtFactory.Kubevirt().V1().VirtualMachine(),
	vmiClient: cp.kubevirtFactory.Kubevirt().V1().VirtualMachineInstance(),
	namespace: namespace,
}
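
A rough sketch of the proposed filter (assuming a wrangler-style OnChange handler; the OnVmiChanged name comes from the earlier review comment, the rest is illustrative):

package sketch

import (
	"sync"

	kubevirtv1 "kubevirt.io/api/core/v1"
)

// handler is a stand-in for the cloud provider's VMI handler; only the fields
// needed for the namespace filter are shown.
type handler struct {
	namespace    string   // the guest cluster's VM namespace, as passed to instanceManager
	nodeToVMName sync.Map // hostname -> VM name
}

func (h *handler) OnVmiChanged(_ string, vmi *kubevirtv1.VirtualMachineInstance) (*kubevirtv1.VirtualMachineInstance, error) {
	if vmi == nil || vmi.DeletionTimestamp != nil {
		return vmi, nil
	}
	// Drop events from other namespaces: the cloud provider serves one guest
	// cluster, so only VMIs in its own VM namespace matter, and same-named VMs
	// elsewhere can no longer overwrite entries in the map.
	if vmi.Namespace != h.namespace {
		return vmi, nil
	}
	// ... existing logic: read the hostname from the guest agent info and store
	// it, e.g. h.nodeToVMName.Store(hostname, vmi.Name) ...
	return vmi, nil
}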

@w13915984028 (Member) replied:

It is reasonable to drop other namespaces.

Signed-off-by: PoAn Yang <poan.yang@suse.com>
@w13915984028 (Member) left a review:

LGTM, thanks.

@bk201 (Member) left a review:

lgtm.
if the change is not compatible with Harvester v1.2.x, be sure to bump the minor version when doing a new tag. (like from v0.2.0 -> v0.3.0).

@FrankYang0529 merged commit d64b2a0 into harvester:master on Feb 29, 2024. 4 checks passed.
@FrankYang0529 deleted the HARV-4947 branch on February 29, 2024 06:14.
@FrankYang0529 (Member, Author) replied:

> lgtm. if the change is not compatible with Harvester v1.2.x, be sure to bump the minor version when doing a new tag. (like from v0.2.0 -> v0.3.0).

Hi @w13915984028, I think this is not a breaking change, so I will create a new tag v0.2.1.
