
mlr3torch dev

  • feat: add po("nn_identity")
  • feat: Add LearnerTorchModule for easily creating torch learners from torch modules.
  • feat: TorchIngressToken can now also take a Selector as the features argument.
  • feat: Added encoders for numerical and categorical features
  • feat: Added po("nn_fn") for calling custom functions in a network.
  • feat: Added po("nn_ft_cls") for concatenating a CLS token to a tokenized input.
  • BREAKING_CHANGE: The output dimension of neural networks for binary classification tasks is now expected to be 1 and not 2 as before. The behavior of nn("head") was also changed to match this. This means that for binary classification tasks, t_loss("cross_entropy") now generates nn_bce_with_logits_loss instead of nn_cross_entropy_loss. This also came with a reparametrization of the t_loss("cross_entropy") loss (thanks to @tdhock, #374).
  • feat: Added function lazy_shape() to get the shape of a lazy tensor (see the sketch after this list).
  • feat: Better error messages for MLP and TabResNet learners.
  • feat: TabResNet learner now supports lazy tensors.
  • feat: The LearnerTorch base class now supports the private method $.ingress_tokens(task, param_vals) for generating the torch::dataset.
  • feat: nn("block") (which allows to repeat the same network segment multiple times) now has an extra argument trafo, which allows to modify the parameter values per layer.
  • feat: Shapes can now contain multiple NAs, i.e. not only the batch dimension can be missing. However, most nn() operators still expect only one missing value and will throw an error if multiple dimensions are unknown.
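
The following is a minimal sketch, not part of the release notes, illustrating lazy_shape(), po("nn_identity"), and the nn() shorthand; it assumes the development version described above and the built-in "lazy_iris" example task with its lazy tensor column "x".

```r
library(mlr3torch)
library(mlr3pipelines)

task <- tsk("lazy_iris")

# lazy_shape() returns the shape of a lazy tensor; NA marks the (unknown) batch dimension
lazy_shape(task$data(cols = "x")$x)

# po("nn_identity") passes its input through unchanged, e.g. as a placeholder;
# nn("linear") / nn("head") are shorthand for po("nn_linear") / po("nn_head")
graph <- po("torch_ingress_ltnsr") %>>%
  po("nn_identity") %>>%
  nn("linear", out_features = 10) %>>%
  nn("head")
```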

mlr3torch 0.2.1

Bug Fixes:

  • LearnerTorchModel can now be parallelized and trained with encapsulation activated.
  • jit_trace now works in combination with batch normalization.
  • Ensures compatibility with R6 version 2.6.0

mlr3torch 0.2.0

Breaking Changes

  • Removed some optimizers for which no fast ('ignite') variant exists.
  • The default optimizer is now AdamW instead of Adam (see the sketch after this list).
  • The private LearnerTorch$.dataloader() method no longer operates on the task but on the dataset generated by the private LearnerTorch$.dataset() method.
  • The shuffle parameter during model training is now initialized to TRUE to sidestep issues where data is sorted.
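
A minimal sketch, assuming the usual lrn() construction of a torch learner such as "classif.mlp", of how the previous defaults could be restored where needed:

```r
library(mlr3torch)

learner <- lrn("classif.mlp",
  epochs = 10, batch_size = 32,
  optimizer = t_opt("adam"),  # opt out of the new AdamW default
  shuffle = FALSE             # opt out of the new shuffle = TRUE initialization
)
```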

Performance Improvements

  • Optimizers now use their faster ('ignite') variants, which leads to considerable speed improvements.
  • The jit_trace parameter was added to LearnerTorch; setting it to TRUE can lead to significant speedups. This should only be enabled for 'static' models; see the torch tutorial for more information and the sketch after this list.
  • Added parameter num_interop_threads to LearnerTorch.
  • The tensor_dataset parameter was added, which allows stacking all batches at the beginning of training to make subsequent batch loading faster.
  • Use a faster default image loader.
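
A minimal sketch of these speed-related settings on the MLP learner; the concrete values are illustrative only, and whether jit_trace is safe to enable depends on the model being 'static':

```r
library(mlr3torch)

learner <- lrn("classif.mlp",
  epochs = 10, batch_size = 32, neurons = c(64, 64),
  jit_trace = TRUE,          # trace the network with the torch JIT ('static' models only)
  num_interop_threads = 2,   # number of interop threads used by torch
  tensor_dataset = TRUE      # stack all batches once up front for faster batch loading
)
```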

Features

  • Added PipeOp for adaptive average pooling.
  • The n_layers parameter was added to the MLP learner (see the sketch after this list).
  • Added multimodal melanoma and cifar{10, 100} example tasks.
  • Added a callback to iteratively unfreeze parameters for finetuning.
  • Added different learning rate schedulers as callbacks.
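
A minimal sketch of the new n_layers parameter on the MLP learner; it assumes that a scalar neurons value combined with n_layers yields that many equally sized hidden layers, and the example task ids are likewise assumptions:

```r
library(mlr3torch)

# An MLP with (assumed) three hidden layers of 100 units each
learner <- lrn("classif.mlp",
  epochs = 10, batch_size = 32,
  neurons = 100,
  n_layers = 3
)

# New example tasks (ids assumed); constructing them may download data on first use
# task <- tsk("cifar10")
# task <- tsk("melanoma")
```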

Bug Fixes:

  • Torch learners can now be used with AutoTuner.
  • Early stopping now uses epochs - patience as the internally tuned value, instead of the total number of trained epochs as before.
  • The dataset of a learner must no longer return the tensors on the specified device, which allows for parallel dataloading on GPUs.
  • PipeOpBlock should no longer create ID clashes with other PipeOps in the graph (#260).

mlr3torch 0.1.2

  • The deprecated data_formats are no longer used.
  • Added CallbackSetTB, which enables logging that can be viewed with TensorBoard (see the sketch below).
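
A minimal sketch of attaching the new TensorBoard callback to a torch learner; the callback id "tb" is an assumption based on the class name CallbackSetTB, and its parameters (such as the log directory) are configured like any other callback parameters:

```r
library(mlr3torch)

learner <- lrn("classif.mlp",
  epochs = 10, batch_size = 32,
  callbacks = t_clbk("tb")  # id assumed; logs can then be inspected with TensorBoard
)
```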

mlr3torch 0.1.1

  • fix(preprocessing): fixed the construction of some PipeOps such as po("trafo_resize"), which failed in some cases.
  • fix(ci): tests were not run in the CI
  • fix(learner): LearnerTabResnet now works correctly
  • feat: added the nn() helper function to simplify the creation of neural network layers (see the sketch below).
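
A minimal sketch of the nn() helper added here: nn("<id>") is shorthand for constructing the corresponding po("nn_<id>") operator, so the two calls below build equivalent linear-layer PipeOps:

```r
library(mlr3torch)

op_long  <- po("nn_linear", out_features = 10)
op_short <- nn("linear", out_features = 10)
```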

mlr3torch 0.1.0

  • Initial CRAN release