[fix] Add first draft of the PR for issue#349 #365
base: development
Conversation
…oml#334)
* [feat] Support statistics print by adding a results manager object
* [refactor] Make SearchResults extract run_history at __init__. Since the search results should not be kept around indefinitely, this class now takes run_history in __init__ so that the extraction is called implicitly inside. From this change on, calling the extraction from outside is not recommended; however, you can still do so, and self.clear() will be called to prevent mixing up the environment.
* [fix] Separate those changes into PR#336
* [fix] Fix so that test_loss includes all the metrics
* [enhance] Strengthen the tests for sprint and SearchResults
* [fix] Fix an issue in the documentation
* [enhance] Increase the coverage
* [refactor] Separate the test for results_manager to organize the structure
* [test] Add the test for get_incumbent_Result
* [test] Remove the previous test_get_incumbent and check the coverage
* [fix] [test] Fix reversion of metric and strengthen the test cases
* [fix] Fix flake8 issues and increase coverage
* [fix] Address Ravin's comments
* [enhance] Increase the coverage
* [fix] Fix a flake8 issue
* [doc] Add workflow of the AutoPytorch
* [doc] Address Ravin's comment
* [feat] Add an object that realizes the performance-over-time visualization
* [fix] Modify TODOs and add comments to avoid complications
* [refactor] [feat] Format the visualizer API and integrate this feature into BaseTask
* [refactor] Separate a shared raise-error process into a function
* [refactor] Gather params in a dataclass to look smarter
* [refactor] Merge the extraction from history into the results manager. Since this feature was added in a previous PR, we now rely on it to extract the history. To handle the order-by-start-time issue, I added sorting by end time.
* [feat] Merge the viz in the latest version
* [fix] Replace NaN with the worst value so that we can always handle results as numbers
* [fix] Fix mypy issues
* [test] Add a test for get_start_time
* [test] Add a test for ordering by end time
* [test] Add tests for ensemble results
* [test] Add tests for merging ensemble results and run history
* [test] Add tests for the case where ensemble_results is None
* [fix] Switch from datetime to timestamp in tests so they pass universally. Since the mapping of timestamp to datetime varies across machines, the tests failed in the previous version. In this version, the datetimes in the tests were changed to fixed timestamps so that the tests pass everywhere.
* [fix] Fix status_msg --> status_type because it does not need to be str
* [fix] Change the name for homogeneity
* [fix] Fix based on the file name change
* [test] Add tests for set_plot_args
* [test] Add tests for plot_perf_over_time in BaseTask
* [refactor] Replace redundant lines with pytest parametrization
* [test] Add tests for _get_perf_and_time
* [fix] Remove the viz attribute based on Ravin's comment
* [fix] Fix doc-strings based on Ravin's comments
* [refactor] Hide the color/label settings extraction in the dataclass. Since this process made the method in BaseTask redundant, as pointed out by Ravin, I made it a method of the dataclass so that we can easily fetch this information. Note that since the color and label information always depends on the optimization results, we always need to pass metric results to ensure we only get related keys.
* [test] Add tests for color/label dict extraction
* [test] Add tests for checking whether plt.show is called or not
* [refactor] Address Ravin's comments and add a TODO for the refactoring
* [refactor] Change KeyError in EnsembleResults to empty results. Since it is inconvenient not to be able to instantiate EnsembleResults when we do not have any histories, I changed the functionality so that it can still be instantiated even when the results are empty. In this case we get empty arrays, which also matches developers' intuition.
* [refactor] Prohibit external updates to make objects more robust
* [fix] Remove the member variable _opt_scores since it is confusing. Since opt_scores were taken from cost in run_history while metric_dict was taken from additional_info, it was confusing where to refer to what. By removing it, we always refer to additional_info when fetching information, and metrics are always available as raw values. Although I changed a lot, the functionality did not change, and it is now easier to add other functionality.
* [example] Add an example of how to plot performance over time
* [fix] Fix unexpected train loss when using cross validation
* [fix] Remove __main__ from the example based on Ravin's comment
* [fix] Move results_xxx from the API to utils
* [enhance] Change the plot-over-time example to save the figure. Since plt.show() does not work in some environments, I changed the example so that everyone can run at least this example.
* cleanup of simple_imputer
* Fixed doc and typo
* Fixed docs
* Made changes, added test
* Fixed init statement
* Fixed docs
* Flake'd
…#351)
* [feat] Add the option to save a figure in the plot setting params. Since non-GUI environments would like to avoid using matplotlib's show method, I added a savefig option so that users can complete the whole operation inside AutoPytorch.
* [doc] Add a comment for non-GUI computers in the plot_perf_over_time method
* [test] Add a test to check the priority of show and savefig. Since plt.savefig and plt.show do not work at the same time by matplotlib's design, we need to check that show is not called when a figname is specified. We could raise an error instead, but plotting is typically called at the end of an optimization, so I wanted to avoid raising an error and stuck to checking this in tests.
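The show/savefig priority described in that commit can be illustrated with a minimal sketch; the function and parameter names here are assumptions for illustration, not the exact AutoPyTorch plotting API.

```python
# Minimal sketch (assumed names): if a figure name is given, save the figure
# and skip plt.show(), since savefig and show cannot be combined reliably.
from typing import Optional

import matplotlib.pyplot as plt


def finish_plot(fig: plt.Figure, figname: Optional[str] = None, show: bool = False) -> None:
    if figname is not None:
        fig.savefig(figname)  # saving takes priority over showing
    elif show:
        plt.show()            # only reached when no figname was specified
```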
* update workflow files
* Remove double quotes
* Exclude python 3.10
* Fix mypy compliance check
* Added PEP 561 compliance
* Add py.typed to MANIFEST for dist
* Update .github/workflows/dist.yml
Co-authored-by: Ravin Kohli <13005107+ravinkohli@users.noreply.github.com>
* Add fit pipeline with tests
* Add documentation for get dataset
* update documentation
* fix tests
* remove permutation importance from visualisation example
* change disable_file_output
* add
* fix flake
* fix test and examples
* change type of disable_file_output
* Address comments from eddie
* fix docstring in api
* fix tests for base api
* fix tests for base api
* fix tests after rebase
* reduce dataset size in example
* remove optional from doc string
* Handle unsuccessful fitting of pipeline better
* fix flake in tests
* change to default configuration for documentation
* add warning for no ensemble created when y_optimization in disable_file_output
* reduce budget for single configuration
* address comments from eddie
* address comments from shuhei
* Add autoPyTorchEnum
* fix flake in tests
* address comments from shuhei
* Apply suggestions from code review
  Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* fix flake
* use **dataset_kwargs
* fix flake
* change to enforce keyword args
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* Add workflow for publishing docker image to github packages and dockerhub
* add docker installation to docs
* add workflow dispatch
Codecov Report

@@             Coverage Diff              @@
##           development     #365      +/-   ##
===============================================
+ Coverage        83.44%   85.30%    +1.85%
===============================================
  Files              163      163
  Lines             9634     9567       -67
  Branches          1689     1665       -24
===============================================
+ Hits              8039     8161      +122
+ Misses            1114      953      -161
+ Partials           481      453       -28

Continue to review the full report at Codecov.
Force-pushed from 4b322b5 to ffb4dae.
        return cost, status, info, additional_run_info


class TargetAlgorithmQuery(AbstractTAFunc):
I think `ExecuteTAFuncWithQueue` was more appropriate as a name. Also, it keeps AutoPyTorch in line with SMAC and auto-sklearn. Let's keep it that way.
Since this is not urgent, I will just reply to this message and one other for now.

Although I agree that `TargetAlgorithmQuery` is not compatible with auto-sklearn, this name is still compatible with SMAC, and the name itself is more appropriate because of the following problems with the previous name.

Apology: I checked the official terminology and it seems `Entry` is the official term rather than `Query`, so `TargetAlgorithmQueueEntry` or `TAFunc4QueueEntry` would be more precise.

The problems with `ExecuteTaFuncWithQueue` are that:
- the name starts with a verb, which is confusing for a class name unless the class is callable,
- the name itself is not correct, because we run the TA func (not "execute" it; the method is named `run()`) when we query the entry from the queue, but we never execute the TA func with the queue,
- instances of this class are stored in the queue as queries, so nothing is executed when the class is instantiated,
- somehow it says `TaFunc`, but not `TAFunc`.

It is impossible to merge `BaseTask` with the one in ASK, so I did not feel that compatibility with ASK in the TAE is necessary, but do you think we need it?
@@ -110,90 +255,66 @@ def __init__(
        stats: Optional[Stats] = None,
        run_obj: str = 'quality',
        par_factor: int = 1,
-       output_y_hat_optimization: bool = True,
+       save_y_opt: bool = True,
Actually, I think `save_y_ensemble_optimization` or even `output_y_ensemble_optimization` would be better, as this flag also acts as a way to disable ensemble construction: if it is `False`, we can't build an ensemble. We could also add this to the description of the variable. It actually has nothing to do with saving `y_opt`; these `true_targets_ensemble` are only used for ensemble construction.
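A hedged sketch of the kind of parameter description being asked for here; the parameter name follows the suggestion above, and the wording is illustrative rather than the merged docstring.

```python
# Illustrative only: how the flag could be documented (assumed parameter name,
# not the merged docstring text).
class TargetAlgorithmQuery:  # sketch, not the full class
    """
    Args:
        save_y_ensemble_optimization (bool):
            Whether to store the targets/predictions for the optimization
            (ensemble) set. If False, nothing is written for ensemble
            building, so ensemble construction is effectively disabled.
    """
```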
    backend=backend,
    seed=seed,
    metric=metric,
    save_y_opt=save_y_opt,
Same here; please change the name.
    search_space_updates = self.fixed_pipeline_params.search_space_updates
    self.logger.debug(f"Search space updates for {num_run}: {search_space_updates}")
I don't think we need to create a variable just to log `search_space_updates`. I added it because I wanted to see whether the updates were being passed correctly when I wrote this functionality. Especially since they will be printed for each run, even though they are in fact the same for all `num_run`s.
-   search_space_updates = self.fixed_pipeline_params.search_space_updates
-   self.logger.debug(f"Search space updates for {num_run}: {search_space_updates}")
try:
    obj = pynisher.enforce_limits(**pynisher_arguments)(self.ta)
-   obj(**obj_kwargs)
+   obj(queue=queue, evaluator_params=params, fixed_pipeline_params=self.fixed_pipeline_params)
Can we name this `pynisher_function_wrapper_obj`? I think it will make it easier to distinguish between the exit_status of `pynisher_function_wrapper_obj` and the `Status` coming from fitting the pipeline. As we have now encapsulated the code that processes the results, I think having more meaningful names here will help us and others understand what's going on.


def _process_exceptions(
    obj: PynisherFunctionWrapperType,
Changing `obj` to `pynisher_function_wrapper_obj` should also be done here.
    budget: float,
    worst_possible_result: float
) -> ProcessedResultsType:
    if obj.exit_status is TAEAbortException:
I think this way of implementing the logic is a bit confusing. Could you maybe handle the exception first and then create the `additional_run_info` based on that? Mainly, I am struggling to see the purpose of `is_anything_exception`. Moreover, I think you are missing some info: for example, `info_for_empty` in the case of a `MEMOUT` previously also contained the memory limit. I'd suggest you add both the `memory_limit` and `func_eval_time`, which is available in `run_info.cutoff`.
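A hedged sketch of the restructuring suggested above, assuming the pynisher/SMAC versions used by AutoPyTorch at the time: decide the status from the exit status first, then build the additional run info from it, including the memory limit and the evaluation time limit. The function name, branching, and dict keys are illustrative, not the merged code.

```python
# Sketch only: handle the exception/exit status first, then derive the info.
from typing import Any, Dict, Tuple

import pynisher
from smac.tae import StatusType, TAEAbortException


def _interpret_exit_status(
    obj: Any,                    # the pynisher function wrapper object
    memory_limit: int,
    func_eval_time_limit: float  # taken from run_info.cutoff
) -> Tuple[StatusType, Dict[str, Any]]:
    if obj.exit_status is TAEAbortException:
        return StatusType.ABORT, {'error': 'Your configuration of autoPyTorch did not work'}
    if obj.exit_status is pynisher.MemorylimitException:
        return StatusType.MEMOUT, {
            'error': 'Memout (used more memory than the allowed limit).',
            'memory_limit': memory_limit,                  # the info requested above
            'func_eval_time_limit': func_eval_time_limit,
        }
    if obj.exit_status is pynisher.TimeoutException:
        return StatusType.TIMEOUT, {'error': 'Timeout', 'func_eval_time_limit': func_eval_time_limit}
    # Any other non-zero exit status is treated as a crash.
    return StatusType.CRASHED, {'error': f'Unexpected exit status: {obj.exit_status}'}
```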
from autoPyTorch.evaluation.tae import ExecuteTaFuncWithQueue, get_cost_of_crash
from autoPyTorch.evaluation.abstract_evaluator import fit_pipeline
from autoPyTorch.evaluation.pipeline_class_collection import get_default_pipeline_config
from autoPyTorch.evaluation.tae import TargetAlgorithmQuery
Also change this back to `ExecuteTaFuncWithQueue`.
@@ -669,22 +670,23 @@ def _do_dummy_prediction(self) -> None:
    # already be generated here!
    stats = Stats(scenario_mock)
    stats.start_timing()
-   ta = ExecuteTaFuncWithQueue(
+   taq = TargetAlgorithmQuery(
Here as well.
        stats=stats,
        memory_limit=memory_limit,
        disable_file_output=self._disable_file_output,
        all_supported_metrics=self._all_supported_metrics
    )

-   status, _, _, additional_info = ta.run(num_run, cutoff=self._time_for_task)
+   status, _, _, additional_info = taq.run(num_run, cutoff=self._time_for_task)
If you don't like `ta`, you could maybe use `tae`, which stands for target algorithm execution.
    do not save the predictions for the optimization set,
    which would later on be used to build an ensemble. Note that SMAC
    optimizes a metric evaluated on the optimization set.
+ `pipeline`:
+ `model`:
I prefer the name `pipeline`.
    do not save any individual pipeline files
+ `pipelines`:
+ `cv_model`:
You can use `cv_pipeline` instead.
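Since this thread is about which keys `disable_file_output` should accept, here is a minimal sketch of an enum-based variant in the spirit of the "Add autoPyTorchEnum" / "change type of disable_file_output" commits above; the class and member names are assumptions, not necessarily the merged API.

```python
# Illustrative sketch only (names are assumptions, not the merged API).
from enum import Enum
from typing import List, Union


class DisableFileOutputParameters(str, Enum):
    pipeline = "pipeline"              # the single (refit) pipeline
    pipelines = "pipelines"            # the per-fold pipelines from cross-validation
    y_optimization = "y_optimization"  # out-of-fold predictions used to build the ensemble
    all = "all"

    @classmethod
    def check_compatibility(cls, values: List[Union[str, "DisableFileOutputParameters"]]) -> None:
        # Fail early on unknown entries instead of silently ignoring them.
        valid = {member.value for member in cls}
        for value in values:
            key = value.value if isinstance(value, cls) else value
            if key not in valid:
                raise ValueError(f"Unknown disable_file_output entry: {value!r}")
```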
@@ -1060,7 +1062,7 @@ def _search(

    # Here the budget is set to max because the SMAC intensifier can be:
    # Hyperband: in this case the budget is determined on the fly and overwritten
-   # by the ExecuteTaFuncWithQueue
+   # by the TargetAlgorithmQuery
Change this here as well.
@@ -1344,7 +1346,7 @@ def refit(
        dataset_properties=dataset_properties,
        dataset=dataset,
        split_id=split_id)
-   fit_and_suppress_warnings(self._logger, model, X, y=None)
+   fit_pipeline(self._logger, model, X, y=None)
I think the previous name emphasised the fact that we are suppressing warnings; otherwise, we could have just used `model.fit(X, y)`. So could you change this name back to what it was? Alternatively, I don't mind `fit_pipeline_suppress_warnings`.
This relates to a question mentioned in the other comment. It is maybe a stupid question, but why do we need to emphasize "suppress warnings"? I would understand it if we swallowed errors during the fit, but warnings are not critical for running a script, and we write them to the log anyway. So it is more like `fit_pipeline_with_logging_warnings`, or, superficially, this method looks identical to just `fit_pipeline`, doesn't it?
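A minimal sketch of the behaviour being discussed, assuming the signature shown in the diff above (`logger, pipeline, X, y`); the body illustrates "catch warnings and log them" and is not the actual AutoPyTorch implementation.

```python
# Sketch only: fit while catching warnings and routing them to the logger
# instead of letting them reach stderr. Internals are illustrative.
import warnings
from logging import Logger
from typing import Any, Dict, Optional


def fit_pipeline(logger: Logger, pipeline: Any, X: Dict[str, Any], y: Optional[Any] = None) -> Any:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        pipeline.fit(X, y)
    for w in caught:
        # Warnings are not fatal, so they are only logged (see the discussion above).
        logger.warning(f"{w.filename}:{w.lineno}: {w.category.__name__}: {w.message}")
    return pipeline
```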


class FixedPipelineParams(NamedTuple):
What do you mean by "fixed" pipeline params?
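For context, a hedged sketch of the split implied by the name: the "fixed" parameters are those shared by every evaluation in one optimization run, as opposed to per-run evaluator parameters. The field lists below are illustrative assumptions, not the exact contents of the PR.

```python
# Illustrative sketch only (field lists are assumptions).
from typing import Any, NamedTuple, Optional


class FixedPipelineParams(NamedTuple):
    """Parameters that stay constant for every evaluation of the search."""
    backend: Any          # access to the datasets and output directories
    metric: Any           # the metric SMAC optimizes
    seed: int
    budget_type: str      # e.g. 'epochs' or 'runtime'
    save_y_opt: bool      # whether to store predictions for ensemble building


class EvaluatorParams(NamedTuple):
    """Parameters that change with every configuration/budget evaluated."""
    configuration: Any
    budget: float
    num_run: Optional[int] = None
```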
        access the train and test datasets
    queue (Queue):
        Each worker available will instantiate an evaluator, and after completion,
        it will append the result to a multiprocessing queue
    metric (autoPyTorchMetric):
-   metric (autoPyTorchMetric):
+   optimize_metric (autoPyTorchMetric):
* check if N==0, and handle this case
* change position of comment
* Address comments from shuhei
* add test evaluator
* add no resampling and other changes for test evaluator
* finalise changes for test_evaluator, TODO: tests
* add tests for new functionality
* fix flake and mypy
* add documentation for the evaluator
* add NoResampling to fit_pipeline
* raise error when trying to construct ensemble with noresampling
* fix tests
* reduce fit_pipeline accuracy check
* Apply suggestions from code review
  Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* address comments from shuhei
* fix bug in base data loader
* fix bug in data loader for val set
* fix bugs introduced in suggestions
* fix flake
* fix bug in test preprocessing
* fix bug in test data loader
* merge tests for evaluators and change listcomp in get_best_epoch
* rename resampling strategies
* add test for get dataset
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* [fix] Fix the no-training issue when using the simple intensifier
* [test] Add a test for the modification
* [fix] Modify the default budget so that the budget is compatible. Since the previous version did not consider the provided budget_type when determining the default budget, I modified this part so that the default budget does not mix up the default budget for epochs and runtime. Note that since the default pipeline config defines epochs as the default budget, I also followed this rule when taking the default value.
* [fix] Fix a mypy error
* [fix] Change the total runtime for a single config in the example. Since the training sometimes does not finish in time, I increased the total runtime for the training so that we can accommodate the training in the given amount of time.
* [fix] [refactor] Fix the SMAC requirement and refactor some conditions
Force-pushed from c41c87f to 4d4e306.
* add variance thresholding
* fix flake and mypy
* Apply suggestions from code review
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* Add new scalers
* fix flake and mypy
* Apply suggestions from code review
  Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* add robust scaler
* fix documentation
* remove power transformer from feature preprocessing
* fix tests
* check for default in include and exclude
* Apply suggestions from code review
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* remove categorical strategy from simple imputer
* fix tests
* address comments from eddie
* fix flake and mypy error
* fix test cases for imputation
* [fix] Add check dataset in transform as well for the test dataset, which does not require fit
* [test] Migrate tests from francisco's PR without modifications
* [fix] Modify so that tests pass
* [test] Increase the coverage
* Fix: keyword arguments to submit
* Fix: Missing param for implementing AbstractTA
* Fix: Typing of multi_objectives
* Add: multi_objectives to each ExecuteTaFuncWithQueue
Force-pushed from 6b577f6 to c13e13a.
* remove datamanager instances from evaluation and smbo
* fix flake
* Apply suggestions from code review
  Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* fix flake
* [fix] Fix the task inference issue mentioned in automl#352. Since sklearn task inference regards targets with integers as a classification task, I modified target_validator so that we always cast targets for regression to float. This workaround is mentioned in the reference below: scikit-learn/scikit-learn#8952
* [fix] [test] Add a small number to labels for regression and add tests. Since target labels are required to be float and sklearn requires numbers after a decimal point, I added a workaround that adds nearly the smallest possible fraction to the array so that we can avoid a mis-inference of the task type by sklearn. Plus, I added tests to check whether we get the expected results for extreme cases.
* [fix] [test] Adapt the modification of targets to scipy.sparse.xxx_matrix
* [fix] Address Ravin's comments and loosen the small-number choice
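A minimal sketch of the workaround this commit describes (the helper name is an assumption, not the TargetValidator code): cast regression targets to float and, when they are all whole numbers, nudge one value so sklearn's type_of_target infers a continuous target instead of a multiclass one.

```python
# Sketch of the idea only, not the actual TargetValidator implementation.
import numpy as np
from sklearn.utils.multiclass import type_of_target


def cast_regression_targets(y: np.ndarray) -> np.ndarray:
    y = y.astype(np.float64)
    if np.all(y == np.round(y)):
        # Whole-number floats are still inferred as 'multiclass', so change one
        # entry by the smallest representable amount (the "small number" above).
        y[0] = np.nextafter(y[0], np.inf)
    assert type_of_target(y) == 'continuous'
    return y


print(type_of_target(np.array([1, 2, 3])))                            # 'multiclass'
print(type_of_target(cast_regression_targets(np.array([1, 2, 3]))))   # 'continuous'
```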
Force-pushed from e48fd14 to 8d9c132.
* Initial implementation without tests
* add tests and make necessary changes
* improve documentation
* fix tests
* Apply suggestions from code review
  Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* undo change in as it causes tests to fail
* change name from InputValidator to input_validator
* extract statements to methods
* refactor code
* check if mapping is the same as expected
* update precision reduction for dataframes and tests
* fix flake
Co-authored-by: nabenabe0928 <47781922+nabenabe0928@users.noreply.github.com>
* [refactor] Refactor __init__ of the abstract evaluator
* [refactor] Collect shared variables in NamedTuples
* [fix] Copy the budget passed to the evaluator params
* [refactor] Add a cross-validation result manager for separate management
* [refactor] Separate the pipeline classes from the abstract evaluator
* [refactor] Increase the safety level of the pipeline config
* [test] Add tests for the changes
* [test] Modify queue.empty in a safer way
* [fix] Find the error in test_tabular_xxx. Since the pipeline is updated after the evaluations and the previous code updated self.pipeline in the predict method, the dummy class only needed to override that method. However, the new code does this separately, so I override the get_pipeline method so that we can reproduce the same results.
* [fix] Fix the shape issue in regression and add a bug comment in a test
* [fix] Fix the ground truth of test_cv. Since we changed the weighting strategy for cross-validation in the validation phase so that we weight the performance of each model proportionally to the size of each VALIDATION split, I needed to change the expected answer. Note that the previous version weighted the performance proportionally to the TRAINING splits for both the training and validation phases.
* [fix] Change qsize --> Empty since qsize might not be reliable
* [refactor] Add the cost for a crash in autoPyTorchMetrics
* [fix] Fix the issue when taking num_classes from a regression task
* [fix] Deactivate the saving of the cv model in the case of holdout
* [test] Add the tests for the instantiation of the abstract evaluator 1 -- 3
* [test] Add the tests for util 1 -- 2
* [test] Add the tests for train_evaluator 1 -- 2
* [refactor] [test] Clean up the pipeline classes and add tests for them 1 -- 2
* [test] Add the tests for tae 1 -- 4
* [fix] Fix an error due to the change in extract learning curve
* [experimental] Increase the coverage
* [test] Add tests for pipeline repr. Since the modifications in the tests removed the coverage of pipeline repr, I added tests to cover those parts. Basically, the decrease in coverage happened due to the usage of dummy pipelines.
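The change in CV weighting described in the test_cv item above (weighting each fold's score by the size of its validation split) can be sketched as follows; the function and variable names are illustrative, not the PR's code.

```python
# Illustrative only: weight each fold's validation score by the size of its
# VALIDATION split, as described in the commit message above.
from typing import Sequence

import numpy as np


def weighted_cv_score(fold_scores: Sequence[float], val_split_sizes: Sequence[int]) -> float:
    weights = np.asarray(val_split_sizes, dtype=float)
    # np.average normalizes the weights, so larger validation splits count more.
    return float(np.average(np.asarray(fold_scores, dtype=float), weights=weights))
```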
…in_evaluator. Since test_evaluator can be merged, I merged it.
* [rebase] Rebase and merge the changes in non-test files without issues
* [refactor] Merge the test- and train-evaluator
* [fix] Fix the import error due to the change xxx_evaluator --> evaluator
* [test] Fix errors in tests
* [fix] Fix the handling of test predictions with no resampling
* [refactor] Move save_y_opt=False for no resampling deeper for simplicity
* [test] Increase the budget size for the no-resampling tests
* [test] [fix] Rebase, modify tests, and increase the coverage
Force-pushed from 8d9c132 to 180ff33.
Types of changes
Note that a Pull Request should only contain one of refactoring, new features or documentation changes.
Please separate these changes and send us individual PRs for each.
For more information on how to create a good pull request, please refer to The anatomy of a perfect pull request.
Checklist:
Description
See issue#349.
Note that although this PR contains many changes, most of them are gathered in three files:
The changes in other files are mostly for tests or deletions because I integrated those features into the aforementioned three files.