Replication resync #6261
Comments
Update: The problem, as you can imagine, is that I had to delete everything on the Pulp instance. In my case there were no repositories from other plugins, but they would likely be affected if you apply this workaround in your environment.
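For what it's worth, here is a minimal sketch of that workaround driven through the REST API; the host, credentials, and deletion order are assumptions, and note that object deletion in Pulp is itself an asynchronous task:

```python
# Minimal sketch of the "delete everything" workaround via the REST API.
# The host and credentials are placeholders; adjust for your deployment.
import requests

BASE_URL = "https://pulp.example.com"  # hypothetical host
AUTH = ("admin", "password")           # placeholder credentials

def delete_all(endpoint):
    """List every object at a master endpoint and DELETE it by href."""
    url = f"{BASE_URL}{endpoint}"
    while url:
        page = requests.get(url, auth=AUTH).json()
        for obj in page["results"]:
            # Each DELETE spawns an asynchronous deletion task in Pulp.
            requests.delete(f"{BASE_URL}{obj['pulp_href']}", auth=AUTH)
        # "next" is a full URL; re-run the whole loop if pagination
        # shifts underneath the deletions.
        url = page["next"]

# Distributions first, then repositories, so nothing still serves the content.
delete_all("/pulp/api/v3/distributions/")
delete_all("/pulp/api/v3/repositories/")

# Finally reclaim the now-orphaned content and artifacts immediately.
requests.post(
    f"{BASE_URL}/pulp/api/v3/orphans/cleanup/",
    json={"orphan_protection_time": 0},
    auth=AUTH,
)
```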
Is there any chance you have some logging output of the failed sync and the failed reattempts? Is it even reproducible?
Sorry, the logs of the Kubernetes worker that ran the task have already rotated. The only thing I have is the record of the task in Pulp:
The cancelled task is this one:
As you can see, the task was running and was stuck in that state for nearly 2 hours.
There is no other task in the task group and no errors in the logs as far as I can see. I was able to reproduce the issue twice: the first time after the creation of the k8s cluster, and the second after destruction and redeployment. The third attempt, when it was able to replicate, was after manually deleting everything in the Pulp cluster as described in the main post. Sorry for the long post and the low level of detail.
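For reference, a minimal sketch of how a stuck task can be inspected and cancelled through the tasks API; the host, credentials, and task href below are placeholders:

```python
# Minimal sketch: inspect and cancel a stuck task via the tasks API.
import requests

BASE_URL = "https://pulp.example.com"          # hypothetical host
AUTH = ("admin", "password")                   # placeholder credentials
TASK_HREF = "/pulp/api/v3/tasks/<task-uuid>/"  # placeholder task href

# Inspect the task record (state, timestamps, progress reports).
task = requests.get(f"{BASE_URL}{TASK_HREF}", auth=AUTH).json()
print(task["state"], task["started_at"], task.get("progress_reports"))

# Request cancellation; Pulp moves the task to "canceling", then "canceled".
requests.patch(f"{BASE_URL}{TASK_HREF}", json={"state": "canceled"}, auth=AUTH)
```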
No worries. I fear, however, that I cannot deduce more information from it.
OK, looking into this, I can confirm that the replica optimization logic fails here.
While investigating the functionality of UpstreamPulp, I noticed that there is no "destroy" option for it in the Pulp client, but it does exist in the API. I unfortunately discovered the hard way that using it will orphan everything replicated on the server, and you will likely have to destroy the cluster if you want to continue. I'm not sure if this is a known issue or if it could affect your modifications.
Yes, making it an exact clone (deleting everything else in the domain) is part of the design. I just realized I cannot find any documentation on it either.
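For anyone following along, here is a minimal sketch of that API-only destroy call, with a placeholder host, credentials, and upstream name; as discussed above, treat it as destructive to the replicated content:

```python
# Minimal sketch: the DELETE on an UpstreamPulp that the CLI does not expose.
import requests

BASE_URL = "https://pulp.example.com"  # hypothetical host
AUTH = ("admin", "password")           # placeholder credentials

# Look up the upstream-pulp by name ("my-upstream" is a placeholder).
listing = requests.get(
    f"{BASE_URL}/pulp/api/v3/upstream-pulps/", auth=AUTH
).json()
href = next(
    u["pulp_href"] for u in listing["results"] if u["name"] == "my-upstream"
)

# The destroy call itself. Warning: per the discussion above, this can leave
# everything that was replicated orphaned on the server.
requests.delete(f"{BASE_URL}{href}", auth=AUTH)
```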
Version
```json
{
    "component": "core",
    "version": "3.69.2",
    "package": "pulpcore",
    "module": "pulpcore.app",
    "domain_compatible": true
}
{
    "component": "ansible",
    "version": "0.23.1",
    "package": "pulp-ansible",
    "module": "pulp_ansible.app",
    "domain_compatible": false
}
{
    "component": "container",
    "version": "2.22.1",
    "package": "pulp-container",
    "module": "pulp_container.app",
    "domain_compatible": false
}
{
    "component": "deb",
    "version": "3.5.0",
    "package": "pulp_deb",
    "module": "pulp_deb.app",
    "domain_compatible": false
}
{
    "component": "certguard",
    "version": "3.69.2",
    "package": "pulpcore",
    "module": "pulp_certguard.app",
    "domain_compatible": true
}
```
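(For reference, a version report like the one above can be collected from the status endpoint; the host below is a placeholder:)

```python
# Minimal sketch: collect the component versions from the status endpoint.
import json
import requests

BASE_URL = "https://pulp.example.com"  # hypothetical host

status = requests.get(f"{BASE_URL}/pulp/api/v3/status/").json()
for component in status["versions"]:
    print(json.dumps(component, indent=4))
```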
K8S installation with pulp-operator.
Describe the bug
During a replication, the last task got stuck, and I had to cancel it after several hours. But now, it is not trying to replicate that repository again, even though I deleted the repository tree created for it.
To Reproduce
Create a new Pulp deployment, create an upstream-pulp (via the Pulp client or the API), and run a replication (via the Pulp client or the API). If anything fails, you will not be able to force a new replication to download the content again. These steps are sketched against the API below.
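A minimal sketch of the reproduction steps against the REST API, with a placeholder host, credentials, and names:

```python
# Minimal sketch of the reproduction steps via the REST API.
import requests

BASE_URL = "https://replica.example.com"  # the new Pulp deployment (placeholder)
AUTH = ("admin", "password")              # placeholder credentials

# Step 1: register the upstream Pulp to replicate from.
upstream = requests.post(
    f"{BASE_URL}/pulp/api/v3/upstream-pulps/",
    json={
        "name": "my-upstream",                     # placeholder name
        "base_url": "https://source.example.com",  # placeholder upstream host
        "api_root": "/pulp/",
        "username": "admin",                       # upstream credentials
        "password": "password",
    },
    auth=AUTH,
)
href = upstream.json()["pulp_href"]

# Step 2: trigger the replication; this spawns the task group that got stuck.
requests.post(f"{BASE_URL}{href}replicate/", auth=AUTH)
```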
Expected behavior
There should be an option to force downloading again from the same source.
Additional context
Even a pulp rpm sync using the repository and the remote created by the replication didn't work. It seems to download something, but it is not published at the end like the rest of the content (see the sketch below).
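A minimal sketch of that manual recovery attempt, assuming placeholder hrefs for the repository and remote that the replication created; the point is that the sync alone produces no publication:

```python
# Minimal sketch: sync the replicated rpm repository, then publish it, since
# synced content is not servable until a publication exists.
import requests

BASE_URL = "https://replica.example.com"                 # hypothetical host
AUTH = ("admin", "password")                             # placeholder credentials
REPO_HREF = "/pulp/api/v3/repositories/rpm/rpm/<uuid>/"  # placeholder href
REMOTE_HREF = "/pulp/api/v3/remotes/rpm/rpm/<uuid>/"     # placeholder href

# Sync downloads the content into a new repository version.
# (In practice, wait for the returned sync task to finish before publishing.)
requests.post(
    f"{BASE_URL}{REPO_HREF}sync/", json={"remote": REMOTE_HREF}, auth=AUTH
)

# A publication is still required before the content can be distributed.
requests.post(
    f"{BASE_URL}/pulp/api/v3/publications/rpm/rpm/",
    json={"repository": REPO_HREF},
    auth=AUTH,
)
```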