Nomad 1.9.3: panic: runtime error: slice bounds out of range [:12] with capacity 0 #24441
Comments
FYI, a node rollback to 1.9.1 yields the exact same issue as above.
Hi @HINT-SJ, thanks for raising this issue, and sorry you've hit yet another class of this bug. I have already raised a linked PR to fix it, along with additional spot-checks to ensure this pattern does not occur elsewhere. I'll work with the rest of the team to get this merged and look to add some additional tests in this area in the future.
Thanks for your continued work :) Fingers crossed!
@HINT-SJ my pleasure. I also had to do a double take on this :D
Guys, is there any timeline on when this fix will be shipped? Our cluster(s) are running thin, to the point where some can't elect a leader anymore. Since we can't add new servers, we're caught between a rock and a hard place! I am not a paying customer and I'm fully aware that I'm using an open-core product. That said, can somebody share a roadmap on this? Thanks
Thanks @blalor, we did end up doing exactly that... Plus some lessons learned along the way.
Nomad version
Nomad v1.9.3 (upgrading from v1.9.1)
Operating system and Environment details
Amazon Linux 2023 (minimal)
AWS EC2 Graviton (t4g.small)
Issue
While attempting to upgrade from Nomad 1.9.1 to 1.9.3 (skipping 1.9.2), the first server node we updated failed to start with the panic shown in the title.
Possibly related to issues #24379 and #24411.
We actually roll out new EC2 instances (so no old data is left on the system), if that helps.
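For anyone curious about this class of panic: the message `slice bounds out of range [:12] with capacity 0` is what the Go runtime emits when code re-slices a buffer to a fixed length without checking that the buffer is big enough. Below is a minimal, hypothetical sketch of the pattern and the guard that avoids it; `headerLen`, `readHeader`, and `readHeaderSafe` are illustrative names, not Nomad's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// headerLen is a hypothetical fixed header size, matching the [:12] in the panic.
const headerLen = 12

// readHeader re-slices without a bounds check. On an empty buffer it
// crashes the process with:
//   panic: runtime error: slice bounds out of range [:12] with capacity 0
func readHeader(buf []byte) []byte {
	return buf[:headerLen]
}

// readHeaderSafe guards the re-slice, turning the crash into a
// recoverable error instead.
func readHeaderSafe(buf []byte) ([]byte, error) {
	if len(buf) < headerLen {
		return nil, errors.New("buffer too short for header")
	}
	return buf[:headerLen], nil
}

func main() {
	var empty []byte // nil slice: length 0, capacity 0

	if _, err := readHeaderSafe(empty); err != nil {
		fmt.Println("guarded path:", err) // guarded path: buffer too short for header
	}

	readHeader(empty) // panics with the message from this issue's title
}
```

The fix referenced in the maintainer's comment presumably adds this kind of bounds check before re-slicing, though the exact change is in the linked PR.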
Reproduction steps
Update a 1.9.1 cluster to 1.9.3 ^^
The cluster has been up for several years and across multiple major versions.
Expected Result
Successfully starting a new server node with the new version :)