
Revert: Move waiting for tasks to separate phase to unblock process #1265

Merged
merged 1 commit into kubev2v:main on Dec 12, 2024

Conversation

mnecas (Member) commented Dec 12, 2024

Revert of: #1262

Reverting because deeper testing at larger scale showed that this solution does not support HA tasks, for example:
haTask-2330-vim.VirtualMachine.createSnapshot-2994898

This reverts commit 4208890.

Signed-off-by: Martin Necas <mnecas@redhat.com>
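
For context, a quick illustration (not forklift code): IDs with the haTask prefix, like the one above, name tasks managed by the ESXi host agent rather than vCenter, which is why a vCenter-side task or event manager never sees them. A minimal sketch; the helper name isHostLocalTask is hypothetical:

```go
package waitphase

import "strings"

// isHostLocalTask reports whether a vSphere task ID names a host-agent
// ("haTask") task. Such tasks live only on the owning ESXi host and are
// not reported to the vCenter task manager, e.g.
// haTask-2330-vim.VirtualMachine.createSnapshot-2994898.
func isHostLocalTask(taskID string) bool {
	return strings.HasPrefix(taskID, "haTask-")
}
```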
@codecov-commenter commented

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 0% with 33 lines in your changes missing coverage. Please review.

Project coverage is 15.57%. Comparing base (59c31b0) to head (50cb254).

Files with missing lines                          Patch %   Lines
pkg/controller/plan/migration.go                  0.00%     18 Missing ⚠️
pkg/controller/plan/adapter/vsphere/client.go     0.00%     12 Missing ⚠️
pkg/controller/plan/adapter/openstack/client.go   0.00%      1 Missing ⚠️
pkg/controller/plan/adapter/ova/client.go         0.00%      1 Missing ⚠️
pkg/controller/plan/adapter/ovirt/client.go       0.00%      1 Missing ⚠️

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1265      +/-   ##
==========================================
+ Coverage   15.54%   15.57%   +0.03%     
==========================================
  Files         112      112              
  Lines       23307    23262      -45     
==========================================
  Hits         3624     3624              
+ Misses      19396    19351      -45     
  Partials      287      287              
Flag        Coverage Δ
unittests   15.57% <0.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.


mnecas merged commit 3929eaf into kubev2v:main Dec 12, 2024
18 of 19 checks passed
mnecas added a commit to mnecas/forklift that referenced this pull request Dec 13, 2024
Add a wait phase for snapshot tasks

Issue:
The main problem in MTV-1753 and MTV-1775 is that we either do not
wait for the VMware task to finish or, when we do wait, we halt the
whole controller process. This causes either performance issues or
even migration failures, so we need a mechanism to wait for the tasks
without halting the whole process.

Fix:
My first attempt, in PR kubev2v#1262, used the event manager. On the
surface this was an easy approach that did not require any additional
changes to the CR. The problem was that some of the tasks were not
reported to the taskManager; these tasks have the prefix haTask. After
some investigation, I found that these tasks run directly on the ESXi
host and are not sent to vSphere, so we can't use the taskManager.

This PR adds the taskIds to the status CR so that additional wait
phases can monitor the tasks. The main controller gets an ESXi client
and creates a property collector to request the specific task from the
host.

Ref:
- https://issues.redhat.com/browse/MTV-1753
- https://issues.redhat.com/browse/MTV-1775
- kubev2v#1262
- kubev2v#1265

Signed-off-by: Martin Necas <mnecas@redhat.com>
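
A minimal govmomi sketch of the host-side lookup the commit message describes, assuming an authenticated vim25.Client connected to the ESXi host that owns the task; the function taskDone and its signature are illustrative, not the actual forklift implementation:

```go
package waitphase

import (
	"context"
	"fmt"

	"github.com/vmware/govmomi/property"
	"github.com/vmware/govmomi/vim25"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

// taskDone builds a managed object reference from a stored task ID and
// asks the host's property collector for the task's current state.
func taskDone(ctx context.Context, esxi *vim25.Client, taskID string) (bool, error) {
	ref := types.ManagedObjectReference{Type: "Task", Value: taskID}
	pc := property.DefaultCollector(esxi)

	var task mo.Task
	if err := pc.RetrieveOne(ctx, ref, []string{"info"}, &task); err != nil {
		return false, err
	}

	switch task.Info.State {
	case types.TaskInfoStateSuccess:
		return true, nil
	case types.TaskInfoStateError:
		msg := "unknown error"
		if task.Info.Error != nil {
			msg = task.Info.Error.LocalizedMessage
		}
		return true, fmt.Errorf("task %s failed: %s", taskID, msg)
	default:
		// Queued or running; the wait phase should check again later.
		return false, nil
	}
}
```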
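And a sketch of the non-blocking wait phase built on that helper, assuming a controller-runtime style reconcile loop; the vmStatus type and taskIds field are illustrative stand-ins for the status CR fields mentioned above, not the real CRD shape:

```go
package waitphase

import (
	"context"
	"time"

	"github.com/vmware/govmomi/vim25"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// vmStatus is an illustrative stand-in for the status CR fields the
// commit message describes: the snapshot task IDs still outstanding.
type vmStatus struct {
	TaskIDs []string `json:"taskIds,omitempty"`
}

// waitForSnapshotTasks polls each stored task once per reconcile and
// requeues instead of blocking, so the controller keeps serving other
// plans while the host finishes its work.
func waitForSnapshotTasks(ctx context.Context, esxi *vim25.Client, status *vmStatus) (reconcile.Result, error) {
	for _, id := range status.TaskIDs {
		done, err := taskDone(ctx, esxi, id) // helper sketched above
		if err != nil {
			return reconcile.Result{}, err
		}
		if !done {
			// At least one task is still running on the host; check
			// again later without holding up the reconcile loop.
			return reconcile.Result{RequeueAfter: 10 * time.Second}, nil
		}
	}
	// All tasks finished; clear the list so the migration can advance.
	status.TaskIDs = nil
	return reconcile.Result{}, nil
}
```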
mnecas added a commit to mnecas/forklift that referenced this pull request Dec 13, 2024
Add a wait phase for snapshot tasks

mnecas added a commit to mnecas/forklift that referenced this pull request Dec 13, 2024
Add a wait phase for snapshot tasks

mnecas added a commit that referenced this pull request Dec 13, 2024
Add a wait phase for snapshot tasks
mnecas added this to the 2.7.7 milestone Dec 13, 2024
mnecas added a commit to mnecas/forklift that referenced this pull request Dec 13, 2024
Add a wait phase for snapshot tasks
mnecas mentioned this pull request Dec 13, 2024
mnecas added a commit that referenced this pull request Dec 13, 2024
Issue:
The main problem of the MTV-1753 and MTV-1775 is that we are either not
waiting for the VMware task to finish or if we are waiting we are halting
the whole controller process. This causes either performance issues or
even migration failures. So we need to add a mechanism to wait for the
tasks without halting the whole process.

Fix:
My first attempt was in PR #1262 which used the event manager. This on
the surface was an easy approach which did not require any additional changes
to the CR. The problem there was that some of the tasks were not
reported to the taskManager. These tasks had a prefix haTask. After some
investigation, I found out that these tasks are directly on the ESXi host
and not sent to the vspehre, so we can't use the taskManager.

This PR adds the taskIds to the status CR so additional wait phases can
monitor the tasks. The main controller will get the ESXi client and
create a property collector to request the specific task from the host.

Ref:
- https://issues.redhat.com/browse/MTV-1753
- https://issues.redhat.com/browse/MTV-1775
- #1262
- #1265

Signed-off-by: Martin Necas <mnecas@redhat.com>