Calico-VPP pod claims IPv6 address of node but uses IPv4 address instead #651
Did a little more troubleshooting. I found that even before applying the calico-vpp.yaml file, just applying the base calico.yaml file to get Calico instantiated, it crashes when I specify the linuxDataplane as VPP (a sketch of that setting is at the end of this comment). When checking the logs, I see the following errors:
As for applying the calico-vpp.yaml file, I managed to check the logs before kubectl lost connectivity to the API. The logs are below:
Any help would be greatly appreciated.
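For context, a minimal sketch of how the VPP dataplane can be selected through the operator Installation resource. The "default" resource name and the field path are assumptions based on the Calico operator API; my actual manifests may differ:

# Check which dataplane the Installation currently requests
kubectl get installation default -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'

# Request the VPP dataplane (merge patch on the same field)
kubectl patch installation default --type merge \
  -p '{"spec":{"calicoNetwork":{"linuxDataplane":"VPP"}}}'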
Environment
Issue description
I'm setting up an IPv6 cluster. Each node in the cluster has two interfaces within ESXi. One interface is an IPv4 interface for OOBM, and the other serves as the main interface for Kubernetes and is the uplink interface for VPP. Whenever I run "kubectl create -f calico-vpp.yaml", my node loses its IPv6 address (as the documentation states). I would expect this to be hitless if I understand the documentation correctly; however, anything trying to reach that IP gets no response. As a result, all kubectl commands stop working, since the API server was using that address.
I used nerdctl to exec into the container, and when running "ip a", the uplink interface I configured shows no IPv6 address, only link-local. Surprisingly, the IPv4 address and interface are listed in the container, and the node has not lost that IP at all.
Is this a bug or am I doing something wrong?
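For reference, a rough sketch of the checks described above. The calico-vpp-dataplane namespace, the container placeholder, and the vppctl availability are assumptions, so adjust for your setup:

# List the dataplane pods and the nodes they run on
kubectl -n calico-vpp-dataplane get pods -o wide

# On the node itself, find the VPP container and inspect its addresses
nerdctl --namespace k8s.io ps | grep vpp
nerdctl --namespace k8s.io exec -it <vpp-container-id> ip a

# If vppctl is available in the container, list the addresses VPP itself holds
# (the CLI socket path may need to be passed with -s)
nerdctl --namespace k8s.io exec -it <vpp-container-id> vppctl show interface addr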
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The calico-vpp pod would be created successfully, and I would maintain IPv6 connectivity to the node.