
cluster-sync fails due to missing CRDs #961

Open
dharmit opened this issue Feb 13, 2023 · 13 comments
Labels: lifecycle/frozen (Indicates that an issue or PR should not be auto-closed due to staleness.)

@dharmit (Contributor) commented Feb 13, 2023

I'm following this guide. I don't really have any changes to test; I'm running it mainly to get an idea of how to test things locally.

I tried both 1.25 and 1.26, but I hit the same error when I run make cluster-sync from the kubevirt directory:

+ /home/dshah/kubevirt/_out/cmd/dump/dump --kubeconfig=/home/dshah/kubevirtci/_ci-configs/k8s-1.25/.kubeconfig
failed to fetch vmis: the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to fetch vmims: the server could not find the requested resource (get virtualmachineinstancemigrations.kubevirt.io)
dump network-attachment-definitions: the server could not find the requested resource
failed to fetch kubevirts: the server could not find the requested resource (get kubevirts.kubevirt.io)
failed to fetch vms: the server could not find the requested resource (get virtualmachines.kubevirt.io)
failed to fetch vmsnapshots: the server could not find the requested resource (get virtualmachinesnapshots.snapshot.kubevirt.io)
failed to fetch vmrestores: the server could not find the requested resource (get virtualmachinerestores.snapshot.kubevirt.io)
failed to fetch vm exports: the server could not find the requested resource (get virtualmachineexports.export.kubevirt.io)
vmi list is empty, skipping logDomainXMLs
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
make: *** [Makefile:158: cluster-sync] Error 1

The steps I followed are below:

# clone the kubevirtci repo and export its path as $KUBEVIRTCI_DIR
# clone the kubevirt repo and export its path as $KUBEVIRT_DIR

$ cd $KUBEVIRTCI_DIR/cluster-provision/k8s/1.25
$ ../provision.sh    # this ends successfully

$ export KUBEVIRTCI_PROVISION_CHECK=1
$ export KUBEVIRTCI_GOCLI_CONTAINER=quay.io/kubevirtci/gocli:latest
$ export KUBEVIRT_PROVIDER=k8s-1.25
$ export KUBECONFIG=$(./cluster-up/kubeconfig.sh)
$ export KUBEVIRT_NUM_NODES=2
$ make cluster-up    # this ends successfully

$ rsync -av $KUBEVIRTCI_DIR/_ci-configs/ $KUBEVIRT_DIR/_ci-configs
$ cd $KUBEVIRT_DIR
$ make cluster-sync

Am I doing something wrong?

Below are some outputs from the cluster. There doesn't seem to be any KubeVirt-related pod running on the cluster:

$ ./cluster-up/kubectl.sh get pods -A
selecting docker as container runtime
NAMESPACE     NAME                                    READY   STATUS    RESTARTS        AGE
default       local-volume-provisioner-v68w9          1/1     Running   0               27m
default       local-volume-provisioner-vbfqp          1/1     Running   0               28m
kube-system   calico-kube-controllers-8fdc956-rvsrq   1/1     Running   0               28m
kube-system   calico-node-8khxc                       1/1     Running   0               28m
kube-system   calico-node-kgckh                       1/1     Running   0               27m
kube-system   coredns-6d6f78d859-2lm6w                1/1     Running   0               28m
kube-system   coredns-6d6f78d859-9l9cp                1/1     Running   0               28m
kube-system   etcd-node01                             1/1     Running   1               28m
kube-system   kube-apiserver-node01                   1/1     Running   1               28m
kube-system   kube-controller-manager-node01          1/1     Running   2 (5m45s ago)   28m
kube-system   kube-proxy-7gk5h                        1/1     Running   0               28m
kube-system   kube-proxy-ldhkb                        1/1     Running   0               27m
kube-system   kube-scheduler-node01                   1/1     Running   2 (5m45s ago)   28m
kubevirt      disks-images-provider-nc4pf             1/1     Running   0               7m7s
kubevirt      disks-images-provider-r2k88             1/1     Running   0               7m7s

$ ./cluster-up/kubectl.sh get deployments -A
selecting docker as container runtime
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           34m
kube-system   coredns                   2/2     2            2           34m

$ ./cluster-up/kubectl.sh get namespaces
selecting docker as container runtime
NAME              STATUS   AGE
default           Active   34m
kube-node-lease   Active   34m
kube-public       Active   34m
kube-system       Active   34m
kubevirt          Active   13m

$ ./cluster-up/kubectl.sh get all -n kubevirt
selecting docker as container runtime
NAME                              READY   STATUS    RESTARTS   AGE
pod/disks-images-provider-nc4pf   1/1     Running   0          13m
pod/disks-images-provider-r2k88   1/1     Running   0          13m

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/disks-images-provider   2         2         2       2            2           <none>          13m
@brianmcarey (Member)

Hi @dharmit - is there any earlier error in make cluster-sync? The dump is generally run when something in the build has failed; it fails here because the KubeVirt resources haven't been successfully deployed.
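
(For context, one way to check whether the KubeVirt operator and CRDs were actually deployed before the dump ran — a minimal sketch, assuming the default kubevirt namespace and the cluster-up wrappers used above:)

$ ./cluster-up/kubectl.sh get crds | grep kubevirt.io    # missing CRDs explain the "could not find the requested resource" errors
$ ./cluster-up/kubectl.sh -n kubevirt get pods           # virt-operator, virt-api, virt-controller, virt-handler should normally show up here
$ make cluster-sync 2>&1 | tee cluster-sync.log          # capture the full output so the first failure is visible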

@brianmcarey (Member)

/cc @brianmcarey

brianmcarey self-assigned this on Mar 3, 2023
@oshoval (Contributor) commented Mar 12, 2023

Hi,
please start with the already created providers (https://github.com/kubevirt/kubevirtci/blob/main/K8S.md). The method you described is for creating a new provider; it is meant for development and is more tricky. It is better to start with the ones that are already created.

Note that once you cd $KUBEVIRT_DIR after the rsync, you need to rerun, from $KUBEVIRT_DIR:
export KUBECONFIG=$(./cluster-up/kubeconfig.sh)

As Brian said, the error was part of a failed cluster-sync; it's best to have the full log in such cases, please.
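
(For reference, a minimal sketch of the pre-built-provider flow described in K8S.md; the provider version below is only an example, and no provision.sh or rsync step is involved:)

$ cd $KUBEVIRT_DIR
$ export KUBEVIRT_PROVIDER=k8s-1.26    # an already published provider
$ make cluster-up
$ export KUBECONFIG=$(./cluster-up/kubeconfig.sh)
$ make cluster-sync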

@kubevirt-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jun 17, 2023
@victortoso (Member) commented Jun 17, 2023

I'm also hitting this. I'm trying to customize the virtual node to have some USB devices, similar to what @xpivarc did in #996.
After generating my own quay.io/kubevirtci/k8s-1.26-centos9:latest image, I can see the emulated USB devices after SSHing into the node, but I can't actually deploy my kubevirt branch with make cluster-sync; it fails with log output similar to the failures mentioned above.

@victortoso (Member)

The log from make cluster-sync starts at the deployment step, where some warnings appear and the dump is triggered:

Deploying ...
+ _kubectl apply -f -
+ export KUBECONFIG=/home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubeconfig
+ KUBECONFIG=/home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubeconfig
+ /home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubectl apply -f -
namespace/kubevirt created
+ [[ k8s-1.26-centos9 =~ kind.* ]]
+ [[ k8s-1.26-centos9 = \e\x\t\e\r\n\a\l ]]
+ _deploy_infra_for_tests
+ [[ true == \f\a\l\s\e ]]
+ _kubectl create -f /home/toso/src/kubevirt/kubevirt/_out/manifests/testing
+ export KUBECONFIG=/home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubeconfig
+ KUBECONFIG=/home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubeconfig
+ /home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/.kubectl create -f /home/toso/src/kubevirt/kubevirt/_out/manifests/testing
Warning: would violate PodSecurity "restricted:latest": privileged (container "target" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "target" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "target" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "images", "local-storage", "host-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "target" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "target" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
daemonset.apps/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
Error from server (NotFound): error when creating "/home/toso/src/kubevirt/kubevirt/_out/manifests/testing/uploadproxy-nodeport.yaml": namespaces "cdi" not found
+ dump_kubevirt
+ '[' 1 -ne 0 ']'
+ echo 'Dump kubevirt state:'
Dump kubevirt state:
+ hack/dump.sh
+ export ARTIFACTS=_out/artifacts
+ ARTIFACTS=_out/artifacts
+ source hack/common.sh
++ '[' -f cluster-up/hack/common.sh ']'
++ source cluster-up/hack/common.sh
+++ '[' -z '' ']'
+++++ dirname 'cluster-up/hack/common.sh[0]'
++++ cd cluster-up/hack/../
+++++ pwd
++++ echo /home/toso/src/kubevirt/kubevirt/cluster-up/
+++ KUBEVIRTCI_PATH=/home/toso/src/kubevirt/kubevirt/cluster-up/
+++ '[' -z '' ']'
+++++ dirname 'cluster-up/hack/common.sh[0]'
++++ cd cluster-up/hack/../../
+++++ pwd
++++ echo /home/toso/src/kubevirt/kubevirt/_ci-configs
+++ KUBEVIRTCI_CONFIG_PATH=/home/toso/src/kubevirt/kubevirt/_ci-configs
+++ KUBEVIRTCI_CLUSTER_PATH=/home/toso/src/kubevirt/kubevirt/cluster-up//cluster
+++ KUBEVIRT_PROVIDER=k8s-1.26-centos9
+++ KUBEVIRT_NUM_NODES=1
+++ KUBEVIRT_MEMORY_SIZE=5120M
+++ KUBEVIRT_NUM_SECONDARY_NICS=0
+++ KUBEVIRT_DEPLOY_ISTIO=false
+++ KUBEVIRT_PSA=false
+++ KUBEVIRT_SINGLE_STACK=false
+++ KUBEVIRT_ENABLE_AUDIT=false
+++ KUBEVIRT_DEPLOY_NFS_CSI=false
+++ KUBEVIRT_DEPLOY_PROMETHEUS=false
+++ KUBEVIRT_DEPLOY_PROMETHEUS_ALERTMANAGER=false
+++ KUBEVIRT_DEPLOY_GRAFANA=false
+++ KUBEVIRT_CGROUPV2=false
+++ KUBEVIRT_DEPLOY_CDI=false
+++ KUBEVIRT_DEPLOY_CDI_LATEST=false
+++ KUBEVIRT_SWAP_ON=false
+++ KUBEVIRT_KSM_ON=false
+++ KUBEVIRT_UNLIMITEDSWAP=false
+++ '[' -z '' ']'
+++ KUBEVIRT_PROVIDER_EXTRA_ARGS=' --ocp-port 8443'
+++ provider_prefix=k8s-1.26-centos9
+++ job_prefix=kubevirt
+++ mkdir -p /home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9
++ export 'GOFLAGS= -mod=vendor -mod=vendor'
++ GOFLAGS=' -mod=vendor -mod=vendor'
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/home/toso/src/kubevirt/kubevirt
++ OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out
++ SANDBOX_DIR=/home/toso/src/kubevirt/kubevirt/.bazeldnf/sandbox
++ VENDOR_DIR=/home/toso/src/kubevirt/kubevirt/vendor
++ CMD_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/apidocs
++ ARTIFACTS=_out/artifacts
++ DIGESTS_DIR=/home/toso/src/kubevirt/kubevirt/_out/digests
++ MANIFESTS_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/home/toso/src/kubevirt/kubevirt/_out/client-python
+++ uname -m
++ ARCHITECTURE=x86_64
+++ uname -m
++ HOST_ARCHITECTURE=x86_64
++ KUBEVIRT_NO_BAZEL=false
++ KUBEVIRT_RELEASE=false
++ OPERATOR_MANIFEST_PATH=/home/toso/src/kubevirt/kubevirt/_out/manifests/release/kubevirt-operator.yaml
++ TESTING_MANIFEST_PATH=/home/toso/src/kubevirt/kubevirt/_out/manifests/testing
+++ determine_cri_bin
+++ '[' '' = podman ']'
+++ '[' '' = docker ']'
+++ podman ps
+++ docker ps
+++ echo docker
++ KUBEVIRT_CRI=docker
++ '[' -z '' ']'
++ KUBEVIRT_GO_BUILD_TAGS=selinux
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /home/toso/src/kubevirt/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v1.0.0-rc.0-26-g7c219338e
++ KUBEVIRT_VERSION=v1.0.0-rc.0-26-g7c219338e
++ DOCKER_CA_CERT_FILE=
++ DOCKERIZED_CUSTOM_CA_PATH=/etc/pki/ca-trust/source/anchors/custom-ca.crt
+ source hack/config.sh
++ unset binaries docker_images docker_tag docker_tag_alt image_prefix image_prefix_alt manifest_templates namespace image_pull_policy verbosity csv_version package_name
++ source hack/config-default.sh
+++ binaries='cmd/virt-operator cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/example-hook-sidecar cmd/example-cloudinit-hook-sidecar cmd/virt-chroot'
+++ docker_images='cmd/virt-operator cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer images/nfs-server images/winrmcli cmd/example-hook-sidecar cmd/example-cloudinit-hook-sidecar tests/conformance'
+++ docker_tag=latest
+++ docker_tag_alt=
+++ image_prefix=
+++ image_prefix_alt=
+++ namespace=kubevirt
+++ deploy_testing_infra=false
+++ csv_namespace=placeholder
+++ cdi_namespace=cdi
+++ image_pull_policy=Always
+++ verbosity=2
+++ package_name=kubevirt-dev
+++ kubevirtci_git_hash=2306070036-c75814e
+++ conn_check_ipv4_address=
+++ conn_check_ipv6_address=
+++ conn_check_dns=
+++ migration_network_nic=eth1
+++ infra_replicas=0
+++ default_csv_version=0.0.0
+++ default_csv_version=0.0.0
+++ [[ 0.0.0 == v* ]]
+++ csv_version=0.0.0
++ source cluster-up/hack/config.sh
+++ unset docker_prefix master_ip network_provider kubeconfig manifest_docker_prefix
+++ KUBEVIRT_PROVIDER=k8s-1.26-centos9
+++ source /home/toso/src/kubevirt/kubevirt/cluster-up/hack/config-default.sh
++++ docker_prefix=kubevirt
++++ master_ip=192.168.200.2
++++ network_provider=flannel
+++ test -f /home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/config-provider-k8s-1.26-centos9.sh
+++ source /home/toso/src/kubevirt/kubevirt/_ci-configs/k8s-1.26-centos9/config-provider-k8s-1.26-centos9.sh
++++ master_ip=127.0.0.1
++++ kubeconfig=/home/toso/src/kubevirt/kubevirtci/_ci-configs/k8s-1.26-centos9/.kubeconfig
++++ kubectl=/home/toso/src/kubevirt/kubevirtci/_ci-configs/k8s-1.26-centos9/.kubectl
++++ gocli=/home/toso/src/kubevirt/kubevirtci/_ci-configs/../cluster-up/cli.sh
++++ docker_prefix=localhost:32894/kubevirt
++++ manifest_docker_prefix=registry:5000/kubevirt
+++ export docker_prefix master_ip network_provider kubeconfig manifest_docker_prefix
++ export binaries docker_images docker_tag docker_tag_alt image_prefix image_prefix_alt manifest_templates namespace image_pull_policy verbosity csv_version package_name
+ /home/toso/src/kubevirt/kubevirt/_out/cmd/dump/dump --kubeconfig=/home/toso/src/kubevirt/kubevirtci/_ci-configs/k8s-1.26-centos9/.kubeconfig
failed to fetch vmis: the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to fetch vmims: the server could not find the requested resource (get virtualmachineinstancemigrations.kubevirt.io)
dump network-attachment-definitions: the server could not find the requested resource
failed to fetch kubevirts: the server could not find the requested resource (get kubevirts.kubevirt.io)
failed to fetch vms: the server could not find the requested resource (get virtualmachines.kubevirt.io)
failed to fetch vmsnapshots: the server could not find the requested resource (get virtualmachinesnapshots.snapshot.kubevirt.io)
failed to fetch vmrestores: the server could not find the requested resource (get virtualmachinerestores.snapshot.kubevirt.io)
failed to fetch vm exports: the server could not find the requested resource (get virtualmachineexports.export.kubevirt.io)
vmi list is empty, skipping logDomainXMLs
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to get vmis from namespace : the server could not find the requested resource (get virtualmachineinstances.kubevirt.io)
failed to fetch vm exports: the server could not find the requested resource (get virtualmachinepools.pool.kubevirt.io)
make: *** [Makefile:159: cluster-sync] Error 1

@victortoso (Member)

After the hint in the log:

...
namespaces "cdi" not found
....
+++ KUBEVIRT_DEPLOY_CDI=false
...

I set KUBEVIRT_DEPLOY_CDI=true before running make cluster-up, and now make cluster-sync works fine. Should we update the documentation accordingly?
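
(For reference, the workaround above as a minimal sequence — KUBEVIRT_DEPLOY_CDI has to be exported before make cluster-up so that the cdi namespace exists by the time cluster-sync applies the testing manifests:)

$ export KUBEVIRT_DEPLOY_CDI=true
$ make cluster-up
$ make cluster-sync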

@oshoval (Contributor) commented Jun 18, 2023

CDI is optional; it should work without it. This seems like a regression, and the right thing to do is to fix it.

Thanks

@kubevirt-bot (Contributor)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Jul 18, 2023
@kubevirt-bot (Contributor)

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot (Contributor)

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dhiller (Contributor) commented Aug 17, 2023

/remove-lifecycle rotten
/lifecycle frozen
/reopen

@kubevirt-bot (Contributor)

@dhiller: Reopened this issue.

In response to this:

/remove-lifecycle rotten
/lifecycle frozen
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

kubevirt-bot reopened this on Aug 17, 2023
kubevirt-bot added the lifecycle/frozen label (Indicates that an issue or PR should not be auto-closed due to staleness.) and removed the lifecycle/rotten label on Aug 17, 2023