All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Compatibility with newer PyTorch Benchmark version.
- Version for protobuf during build.
- Conditional install of redis on Windows platforms.
- Device transfer in benchmark.
- Defensive fallback for FLOPs measurement.
- Add MultiStepLR optimizers.
- Profiling to use the `pytorch_benchmark` package.
- WandB logger log_dir extraction.
- Profile only warms up on first inference.
- Memory profiling.
- Tune DeprecationWarning.
- Add pred and target dict support in Lifecycle.
- Avoid detaching loss in step.
- Add preprocess_batch method to Lifecycle.
- Add option for string type in utils.name.
- Add Metric Selector.
- Weight freezing during model loading.
- Fix discriminative_lr param selection for NoneType parameters.
- Fix wandb project naming during hparamsearch.
- Optimizer schedulers take `accumulate_grad_batches` into account.
- Key debug statements while loading models now include both missing and unexpected keys.
- Bumped PL to version 1.4. Holding back on 1.5 due to Tune integration issues.
- Bumped Tune to version 1.8.
- Update profile to use `model.__call__`. This enables non-`forward` executions during profiling.
- Add `DefaultMethods` mixin with `warm_up` to make `warm_up` overloadable by mixins.
- Fix `warm_up` function signature.
- Requirement versions.
- `warm_up` function that is called prior to profiling.
- Learning rate schedulers discounted steps.
- Logging of layers that are unfrozen.
- Cyclic learning rate schedulers now update on step.
- Added explicit logging of model profiling results.
- Automatic assignment of hparams.num_gpus.
- Finetune weight loading checks.
- Cyclic learning rate schedulers account for batch size.
- Feature extraction on GPU.
- Added explicit logging of hparams.
- Pass args correctly to trainer during testing.
- CheckpointEveryNSteps now included in ModelCheckpoint, cf. pl==1.3.
- Import from `torchmetrics` instead of `pl.metrics`.
- Moved confusion matrix to RideClassificationDataset and updated plot.
- Feature extraction and visualisation.
- Lifecycle and Finetuneable mixins always included via RideModule.
- Support for pytorch-lightning==1.3.
- Additional tests: Coverage is now at 92%.
- Support for nested inheritance of RideModule.
- Support for pytorch-lightning==1.2.
- Project dependencies: removed click and added psutil to requirements.
- Logging: Save stdout and stderr to run.log.
- Logged results names: flattened folder structure and streamlined names.
- Docstrings to remaining core classes.
- Tests that logged results exists.
- Add support for namedtuples in dataset `input_shape` and `output_shape`.
- Add tests for `test_ensemble`.
- Expose more classes via `from ride import XXX`.
- Fix import error in hparamsearch.
- Fix issues in metrics and add tests.
- Remove unused cache module.
- Renamed `Dataset` to `RideDataset`.
- Documentation for getting started, the Ride API, and a general API reference.
- Automatic import of `SgdOptimizer`.
- Renamed `Dataset` to `RideDataset`.
- Initial publicly available implementation of the library.