Revert: Move waiting for tasks to separate phase to unblock process #1265
Conversation
This reverts commit 4208890. Signed-off-by: Martin Necas <mnecas@redhat.com>
Codecov Report

@@           Coverage Diff            @@
##             main    #1265    +/- ##
======================================
+ Coverage   15.54%   15.57%   +0.03%
======================================
  Files         112      112
  Lines       23307    23262      -45
======================================
  Hits         3624     3624
+ Misses      19396    19351      -45
  Partials      287      287
Add a wait phase for snapshot tasks

Issue: The main problem in MTV-1753 and MTV-1775 is that we either do not wait for the VMware task to finish, or we do wait and halt the whole controller process. This causes performance issues or even migration failures, so we need a mechanism to wait for the tasks without halting the whole process.

Fix: My first attempt, in PR kubev2v#1262, used the event manager. On the surface this was an easy approach that did not require any additional changes to the CR. The problem was that some of the tasks were not reported to the taskManager; these tasks had the prefix haTask. After some investigation, I found that these tasks run directly on the ESXi host and are not sent to vSphere, so we can't use the taskManager. This PR instead adds the taskIds to the status CR so additional wait phases can monitor the tasks (see the sketches below). The main controller gets the ESXi client and creates a property collector to request the specific task from the host.

Ref:
- https://issues.redhat.com/browse/MTV-1753
- https://issues.redhat.com/browse/MTV-1775
- kubev2v#1262
- kubev2v#1265

Signed-off-by: Martin Necas <mnecas@redhat.com>
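A minimal sketch of the first piece: recording task IDs on the CR status so a later wait phase can pick them up. The field and type names here are illustrative, not the actual Forklift API:

```go
// Hypothetical status type; the real Forklift CRD fields differ.
type VMStatus struct {
	// Phase is the VM's current migration phase.
	Phase string `json:"phase"`
	// TaskIDs records in-flight VMware task IDs (including host-local
	// haTask-* IDs) so a wait phase can poll them later instead of the
	// controller blocking until each task completes.
	TaskIDs []string `json:"taskIds,omitempty"`
}
```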
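And a sketch of how a wait phase could poll one of those tasks with govmomi's property collector, connecting straight to the ESXi host because haTask-* tasks are not reported to vCenter's taskManager. This is an illustration under those assumptions, not the PR's actual code: the helper name, URL, and credentials are made up, and error handling is reduced to the essentials.

```go
package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/property"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

// taskDone checks a single VMware task by ID through a property collector,
// returning immediately instead of blocking the reconcile loop.
func taskDone(ctx context.Context, client *govmomi.Client, taskID string) (bool, error) {
	ref := types.ManagedObjectReference{Type: "Task", Value: taskID}
	pc := property.DefaultCollector(client.Client)
	var task mo.Task
	if err := pc.RetrieveOne(ctx, ref, []string{"info"}, &task); err != nil {
		return false, err
	}
	switch task.Info.State {
	case types.TaskInfoStateSuccess:
		return true, nil
	case types.TaskInfoStateError:
		return true, fmt.Errorf("task %s failed: %s", taskID, task.Info.Error.LocalizedMessage)
	default:
		// Still queued or running; the phase stays active and the
		// controller re-checks on the next reconcile.
		return false, nil
	}
}

func main() {
	ctx := context.Background()
	// Connect directly to the ESXi host (hypothetical URL and credentials):
	// haTask-* IDs are host-local and not visible through vCenter.
	u, _ := url.Parse("https://root:password@esxi.example.com/sdk")
	client, err := govmomi.NewClient(ctx, u, true)
	if err != nil {
		panic(err)
	}
	done, err := taskDone(ctx, client, "haTask-2330-vim.VirtualMachine.createSnapshot-2994898")
	fmt.Println("done:", done, "err:", err)
}
```

Because the check returns immediately, the reconcile loop stays unblocked and simply requeues until the task reaches a terminal state.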
Revert of: #1262

Reverting because deeper testing at larger scale showed that this solution does not support haTask-prefixed tasks, for example:

haTask-2330-vim.VirtualMachine.createSnapshot-2994898