[Data] Introduce seams to DefaultAutoscaler2 to make it more testable #59933
base: master
Conversation
Code Review
This pull request refactors DefaultClusterAutoscalerV2 to support dependency injection for its autoscaling coordinator and node count retrieval logic, making it more testable. It introduces a new FakeAutoscalingCoordinator for testing purposes, which always allocates requested resources. Correspondingly, the tests for DefaultClusterAutoscalerV2 are updated to utilize this new injection mechanism and the fake coordinator, replacing previous mocking. Review comments suggest minor code style improvements, including using the or operator for concise default value assignment in DefaultClusterAutoscalerV2's __init__ method and dict.pop() for more concise item removal in FakeAutoscalingCoordinator.
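The two style suggestions are easiest to see in code. Below is a minimal, hypothetical sketch of what the reviewers are pointing at; ExampleAutoscaler, ExampleFakeCoordinator, and _DefaultCoordinator are placeholder names for illustration, not the actual Ray classes:

```python
from typing import Any, Dict, Optional


class _DefaultCoordinator:
    """Hypothetical stand-in for the production coordinator."""


class ExampleAutoscaler:
    """Illustration only; not the actual DefaultClusterAutoscalerV2."""

    def __init__(self, coordinator: Optional[_DefaultCoordinator] = None) -> None:
        # `or` collapses the usual `if coordinator is None: ...` branch
        # into a single concise default assignment.
        self._coordinator = coordinator or _DefaultCoordinator()


class ExampleFakeCoordinator:
    """Illustration of the dict.pop() suggestion for the fake coordinator."""

    def __init__(self) -> None:
        self._allocations: Dict[str, Any] = {}

    def deallocate(self, request_id: str) -> Any:
        # pop() removes the entry and returns its value in one call,
        # replacing a separate lookup followed by `del`.
        return self._allocations.pop(request_id, None)
```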
python/ray/data/_internal/cluster_autoscaler/default_cluster_autoscaler_v2.py
python/ray/data/_internal/cluster_autoscaler/fake_autoscaling_coordinator.py
… for testing Signed-off-by: 400Ping <fourhundredping@gmail.com>
91d81c5 to 57efe34
@bveeramani PTAL
Description
The DefaultAutoscaler2 implementation needs an AutoscalingCoordinator and a way to get all of the _NodeResourceSpecs. Currently, we can't explicitly inject fake implementations of either dependency. This is problematic because the tests need to assume what the implementation of each dependency looks like and use brittle mocks.
To solve this:
- Move the FakeAutoscalingCoordinator implementation to a new fake_autoscaling_coordinator.py module (you can use the code below)
- DefaultClusterAutoscalerV2 has two new parameters, autoscaling_coordinator: Optional[AutoscalingCoordinator] = None and get_node_counts: Callable[[], Dict[_NodeResourceSpec, int]] = get_node_resource_spec_and_count. If autoscaling_coordinator is None, you can use the default implementation.
- Update test_try_scale_up_cluster to use the explicit seams rather than mocks. Where possible, assert against the public interface rather than implementation details (see the sketch after this list)
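The following is a rough, self-contained sketch of what the two seams could look like. Every name in it (ClusterAutoscalerSketch, _DefaultCoordinator, _default_node_counts, and the simplified NodeResourceSpec) is a placeholder for illustration, not the actual Ray Data implementation:

```python
from typing import Callable, Dict, Tuple

# Simplified stand-in for _NodeResourceSpec, e.g. (cpu, gpu).
NodeResourceSpec = Tuple[float, float]


class _DefaultCoordinator:
    """Placeholder for the production AutoscalingCoordinator."""

    def request_resources(self, bundles):
        raise NotImplementedError("would talk to the real autoscaler")


class FakeAutoscalingCoordinator:
    """Test double that always grants whatever resources are requested."""

    def __init__(self) -> None:
        self.requests = []

    def request_resources(self, bundles):
        # Record the request and pretend the allocation succeeded.
        self.requests.append(bundles)
        return bundles


def _default_node_counts() -> Dict[NodeResourceSpec, int]:
    # Stand-in for get_node_resource_spec_and_count; a real implementation
    # would inspect the cluster's nodes.
    return {}


class ClusterAutoscalerSketch:
    """Illustration of the two injection seams; not the real class."""

    def __init__(
        self,
        autoscaling_coordinator=None,
        get_node_counts: Callable[[], Dict[NodeResourceSpec, int]] = _default_node_counts,
    ) -> None:
        # If no coordinator is injected, fall back to the default one.
        self._coordinator = autoscaling_coordinator or _DefaultCoordinator()
        self._get_node_counts = get_node_counts

    def try_scale_up(self, bundles):
        # Delegate to the (possibly fake) coordinator instead of reaching
        # into global state, so tests can assert on the public interface.
        return self._coordinator.request_resources(bundles)


# In a test, the fake coordinator replaces brittle mocks:
fake = FakeAutoscalingCoordinator()
autoscaler = ClusterAutoscalerSketch(
    autoscaling_coordinator=fake,
    get_node_counts=lambda: {(8.0, 0.0): 2},
)
granted = autoscaler.try_scale_up([{"CPU": 4}])
assert granted == [{"CPU": 4}]
assert fake.requests == [[{"CPU": 4}]]
```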
Related issues
Closes #59683