nopreempt doesn't seem to work for keepalived 1.4.5 #20
Comments
Hello, it would be nice to open a proper issue on the keepalived repo too.
@BertrandGouny
You can continue the discussion over there.
Thanks. Does this also occur if the container is run …? This may be related to a larger problem I'm also facing with keepalived 2.x: it can't find the network interface in the container. This is also reported in 1.4.5 when keepalived starts, but VRRP manages to use the interface after a short period of time. Not sure what is happening.
keepalived 1.3.5 also has this problem.
!server1, 192.168.1.222 config: global_defs, vrrp_script chk_myscript, and vrrp_instance VI_1 blocks (contents not shown).
!server2, 192.168.1.223 config: global_defs, vrrp_script chk_myscript, and vrrp_instance VI_1 blocks (contents not shown).
systemd [Unit] config on both servers (not shown), plus logs from server 1 after rebooting server 1 and after rebooting server 2 (not shown).
I have to follow these steps to prevent the VIP from floating.
Problem
nopreempt works great when the docker service stops/restarts, when my network interface goes down, and when I restart the keepalived container; but when I restart the machine with priority 51, it takes back control from the other node (it preempts). Following the discussion here, I added a 60 s delay before startup of the keepalived service inside my container (in process.sh), but it still preempts the lower-priority node after a minute. What could possibly be wrong here? Obviously it isn't the network, because it doesn't take that long to initialize. This is a clone of this issue.
Configuration
My configuration file looks like this:
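The file itself isn't included above. As a point of reference, a minimal nopreempt setup usually looks something like the sketch below; the interface name, router ID, VRID, VIP, and password here are assumptions, not the values from this report. The key detail is that nopreempt is only honoured when the instance's initial state is BACKUP on every node, including the higher-priority one.

```
! Hypothetical minimal keepalived.conf sketch, not the reporter's actual file.
global_defs {
    router_id node-b               ! assumed hostname
}

vrrp_instance VI_1 {
    state BACKUP                   ! nopreempt is only honoured with initial state BACKUP
    interface eth0                 ! assumed interface name
    virtual_router_id 51           ! assumed VRID (unrelated to the priority value 51)
    priority 50                    ! the peer would use 51
    advert_int 1
    nopreempt                      ! do not take the VIP back from a lower-priority master
    authentication {
        auth_type PASS
        auth_pass secret           ! assumed password
    }
    virtual_ipaddress {
        192.168.1.231              ! assumed VIP
    }
}
```

keepalived also has a built-in preempt_delay option (seconds to wait after startup before preempting), which can replace an external sleep in process.sh, but it only applies when nopreempt is not set.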
Logs
I also tried manually starting the container some time after reboot, and it still preempts the lower-priority node. I'm getting the following logs after rebooting the higher-priority node:
tcpdump
I got the tcpdump at the reboot time of the higher-priority node. Machine 1.89 has priority 51 and machine 1.141 (on which I'm dumping) has priority 50, with the above-mentioned configuration. In this dump, the machine with priority 51 (1.89) goes down at 13:13:15 and comes alive again at 13:13:37. Keepalived is started after a 5-minute delay, and the preemption occurs: you can see it happening at 13:18:52. Let me know if any further information is required to pin down the issue.
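One thing worth noting about this timeline: preemption is decided by the node that rejoins, so the nopreempt and state BACKUP settings matter most on the priority-51 machine (1.89). As a sketch (instance name, interface, VRID, and VIP are assumptions carried over from the sketch above), its vrrp_instance would need to contain at least:

```
vrrp_instance VI_1 {
    state BACKUP          ! nopreempt requires initial state BACKUP; state MASTER would take over on startup
    interface eth0        ! assumed interface name
    virtual_router_id 51  ! assumed VRID, must match the peer
    priority 51
    advert_int 1
    nopreempt             ! tells this node not to take the VIP back from a lower-priority master
    virtual_ipaddress {
        192.168.1.231     ! assumed VIP, must match the peer
    }
}
```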