[Decommission Hub] Carbon Plan #3483
Comments
Assigned @yuvipanda to process this decommission given he will be around during the requested shutdown time.
Agreed. See https://2i2c.freshdesk.com/a/tickets/1156 for the follow-up with CarbonPlan. Looks like there might be a 30min call already scheduled with @colliand for after AGU.
@yuvipanda I'd be glad to chat separately if the already-scheduled block at 12-12:30pm ET tomorrow doesn't work for you. Thank you all for your work on the decommissioning. On "Confirm if there is any data to migrate from the hub before decommissioning", does this refer to backing up the NFS? I've worked with users to back up their individual home directories, but I'm curious whether you have recommendations for creating a backup of the full file storage system.
@maxrjones I can tar up everyone's home directories and put the archive in your home directory. How does that sound? I'll try to join the meeting if I can!
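The backup proposed here could be sketched as below. This is a minimal sketch: the helper name and the NFS mount path in the example call are hypothetical, not confirmed paths from this cluster.

```shell
#!/usr/bin/env bash
# Sketch of the proposed backup: bundle every user's home directory
# into one tarball dropped somewhere a single user can fetch it.
# NOTE: the paths in the example call below are hypothetical.
set -euo pipefail

backup_homedirs() {
  local nfs_root="$1"   # directory containing all users' home dirs
  local dest="$2"       # where to write the tarball
  # -C makes the archive's paths relative to the NFS root
  tar -czf "$dest" -C "$nfs_root" .
}

# Example (hypothetical paths):
# backup_homedirs /export/home-01 /tmp/homedirs-backup.tar.gz
```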
This seems like a good idea to me, thank you! I know this is unlikely, but we had a hub on GCP that was decommissioned sometime during late summer to early fall 2022. One person who was on leave during that time lost their home directory in that process. Do you happen to have a record of whether any backing up happened, or whether the storage part of that infrastructure remains?
It was wonderful chatting with you today, @maxrjones! There's now a tarball of everyone's home directories in your home directory. Unfortunately, I don't think anything remains from the GCP time, so we cannot retrieve that.
Great chatting with you as well! Great, thank you! I successfully downloaded it.
Great, @maxrjones! I'll decommission this sometime in the next few days and update this issue.
@maxrjones I've cleaned up all the resources that we created. I see a couple of nebari-related resources, so I have not touched those. I'd appreciate it if you (or whoever is taking care of the nebari stuff) could take a look to make sure that all the resources (EBS volumes in particular) that are still present are expected to be there, and not leftovers from our cluster!
Thanks, Yuvi! I'll take a look tomorrow.
FYI: I think removing CarbonPlan is messing up our global usage dashboard; I've opened an issue here:
I reached out to the Nebari folks about the EBS volumes and confirmed those resources are still present. There's also a handful of older volumes from 2021 named |
@maxrjones ah, I'll take a look at those early next week and see what they may be and get back to you!
@maxrjones I took a quick look and these can also all be removed.
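A sweep like the one described in this thread, looking for leftover EBS volumes, could be sketched with the AWS CLI. The wrapper function name is illustrative, and the sketch assumes credentials and a default region are already configured:

```shell
#!/usr/bin/env bash
# Sketch (hypothetical helper): list EBS volumes in the "available"
# (unattached) state, where leftover disks from a deleted cluster
# usually end up.
set -euo pipefail

list_orphan_volumes() {
  aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    --output table
}

# Example: list_orphan_volumes
```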
@maxrjones just wanted to check in and see if there's anything else we can do to help :)
@maxrjones I'm going to close this one out! Let us know if there's anything more we need to do :) It was great working with you over the last few years! <3
Thanks @yuvipanda! It was great working with you as well. Super grateful for all that you and your team members bring to the open source community!
Summary
CS&S forwarded a request, sent on 2023-11-30, asking that the Carbon Plan hub be decommissioned. The message included the following text:
Info
Task List
Phase I
Phase II - Hub Removal
(These steps are described in more detail in the docs at https://infrastructure.2i2c.org/en/latest/hub-deployment-guide/hubs/other-hub-ops/delete-hub.html)
- [ ] Remove the appropriate `config/clusters/<cluster_name>/<hub_name>.values.yaml` files. A complete list of relevant files can be found under the appropriate entry in the associated `cluster.yaml` file.
- [ ] Remove the hub's entry from the `config/clusters/<cluster_name>/cluster.yaml` file.
- [ ] Delete the helm release: `helm --namespace HUB_NAME delete HUB_NAME`
- [ ] Delete the hub's namespace: `kubectl delete namespace HUB_NAME`
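The two command-line steps above could be wrapped in a small helper; `remove_hub` is a hypothetical name, and the sketch assumes the helm release and the namespace share the hub's name, as in the checklist:

```shell
#!/usr/bin/env bash
# Sketch of the hub-removal commands from the checklist above.
# remove_hub is a hypothetical helper, not part of the 2i2c tooling.
set -euo pipefail

remove_hub() {
  local hub="$1"
  # The helm release and its namespace are both named after the hub.
  helm --namespace "$hub" delete "$hub"
  kubectl delete namespace "$hub"
}

# Example: remove_hub carbonplan
```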
Phase III - Cluster Removal
This phase is only necessary for single hub clusters.
- [ ] Remove the cluster's datasource from the central Grafana: `deployer grafana central-ds remove CLUSTER_NAME`
- [ ] Run `terraform plan -destroy` and `terraform apply` from the appropriate workspace, to destroy the cluster
- [ ] Delete the terraform workspace: `terraform workspace delete <NAME>`
- [ ] Delete the `config/clusters/<cluster_name>` directory and all its contents
- [ ] Remove the cluster from CI: `deploy-hubs.yaml`, `validate-clusters.yaml`
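Taken together, the command-line steps of this phase could run in sequence as sketched below; `teardown_cluster` is a hypothetical wrapper, and the sketch assumes the shell is already in the appropriate terraform workspace:

```shell
#!/usr/bin/env bash
# Sketch of the cluster-teardown commands from the checklist above.
# teardown_cluster is a hypothetical wrapper, not part of 2i2c tooling.
set -euo pipefail

teardown_cluster() {
  local cluster="$1"
  # Drop the cluster's datasource from the central Grafana
  deployer grafana central-ds remove "$cluster"
  # Destroy the cloud resources from the current workspace
  terraform plan -destroy
  terraform apply
  # A workspace cannot be deleted while selected, so switch away first
  terraform workspace select default
  terraform workspace delete "$cluster"
}

# Example: teardown_cluster carbonplan
```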