Add azure as platform and hypershift as subplatform #28
Conversation
Commits 9364546 to 3f2b343
hcp-burner.ini (Outdated)
workload_script_path = "workloads/kube-burner-ocp-wrapper"
enable_workload = True
workload_script = run.sh
workload = cluster-density-v2
Shouldn't this remain cluster-density-ms? v2 might be too much for concurrent workloads.
restored
hcp-burner.py (Outdated)
@@ -85,7 +85,7 @@
         raise

 if str(platform.environment["cleanup_clusters"]).lower() == "true":
-    platform = utils.get_cluster_info(platform)
+    # platform = utils.get_cluster_info(platform)
This is required to run cleanup-cluster separately: if we ever need to run the cleanup job on a failed/incomplete execution, this function reads the cluster info for the given uuid and cluster_prefix and prepares the dict used later. Please let me know if it has to be removed to make this work.
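The standalone cleanup flow described above could be sketched roughly as follows. This is an illustrative reconstruction, not the actual hcp-burner code: the function name `rebuild_cluster_info`, the `cluster_name_seed` key, and the sample values are all assumptions.

```python
# Hypothetical sketch: rebuild the per-cluster dict that a normal install
# run would have created, so a later cleanup-only run can find each
# cluster's path and kubeconfig. Names are illustrative, not real API.
def rebuild_cluster_info(environment, cluster_names):
    environment["clusters"] = {}
    for name in cluster_names:
        # keep only clusters matching the prefix (cluster_prefix/uuid seed)
        if not name.startswith(environment["cluster_name_seed"]):
            continue
        path = environment["path"] + "/" + name
        environment["clusters"][name] = {
            "path": path,
            "kubeconfig": path + "/kubeconfig",
        }
    return environment

env = {"path": "/tmp/hcp", "cluster_name_seed": "perf-"}
env = rebuild_cluster_info(env, ["perf-0001", "perf-0002", "other-x"])
```

After this, the cleanup loop can iterate `env["clusters"]` exactly as the main flow would, without re-running the install phase.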
restored
@@ -132,16 +132,14 @@ def get_cluster_info(self, platform):
     platform.environment["clusters"][cluster_name]["path"] = platform.environment["path"] + "/" + cluster_name
     platform.environment["clusters"][cluster_name]["kubeconfig"] = platform.environment["clusters"][cluster_name]["path"] + "/kubeconfig"
     platform.environment['clusters'][cluster_name]['workers'] = int(platform.environment["workers"].split(",")[(loop_counter - 1) % len(platform.environment["workers"].split(","))])
-    cluster_mc = platform.get_mc(platform.get_cluster_id(cluster_name))
-    platform.environment["mc_kubeconfig"] = platform.environment["path"] + "/kubeconfig_" + cluster_mc
Is this intentional? I had included this to store the MC kubeconfig so cleanup-cluster can be called independently later. Let me know if you want this to be removed as well.
MC is a concept of hypershift, and this project is expected to also work with rosa classic installations. So, if any platform requires an MC to create/delete a cluster, this should be a custom function inside the required platform, not used directly from the utils package.
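The suggested layout could look something like this. Purely illustrative (class and method names are assumptions, and the command strings are placeholders): MC handling lives only on the hypershift platform, so rosa classic never touches it.

```python
# Sketch of platform-specific MC handling (assumed names, not real code).
class Platform:
    def delete_cluster(self, name):
        raise NotImplementedError

class RosaClassic(Platform):
    # rosa classic has no management-cluster concept
    def delete_cluster(self, name):
        return f"rosa delete cluster {name}"

class Hypershift(Platform):
    def get_mc(self, cluster_id):
        # hypershift-specific: resolve the management cluster for a cluster
        return "mc-" + cluster_id

    def delete_cluster(self, name):
        mc = self.get_mc(name)
        return f"delete {name} via management cluster {mc}"
```

With this split, `utils` stays platform-agnostic and each platform owns its own lifecycle details.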
PR #29 PTAL
lgtm, tested.
Type of change
Description
Adds azure as a platform and hypershift cli as a subplatform.
Related Tickets & Documents
Checklist before requesting a review