- feat: Added `po("nn_identity")`.
- feat: Added `LearnerTorchModule` for easily creating torch learners from torch modules (example sketch after this list).
- feat: `TorchIngressToken` now also accepts a `Selector` as the `features` argument.
- feat: Added encoders for numerical and categorical features.
- feat: Added `po("nn_fn")` for calling custom functions in a network (example sketch after this list).
- feat: Added `po("nn_ft_cls")` for concatenating a CLS token to a tokenized input.
- BREAKING_CHANGE: The output dimension of neural networks for binary classification tasks is now expected to be 1 instead of 2. The behavior of `nn("head")` was also changed to match this. This means that for binary classification tasks, `t_loss("cross_entropy")` now generates `nn_bce_with_logits_loss` instead of `nn_cross_entropy_loss`. This also came with a reparametrization of the `t_loss("cross_entropy")` loss (thanks to @tdhock, #374).
- feat: Added the function `lazy_shape()` to get the shape of a lazy tensor (example sketch after this list).
- feat: Better error messages for the MLP and TabResNet learners.
- feat: TabResNet learner now supports lazy tensors.
- feat: The `LearnerTorch` base class now supports the private method `$.ingress_tokens(task, param_vals)` for generating the `torch::dataset`.
- feat: `nn("block")` (which allows repeating the same network segment multiple times) now has an extra argument `trafo`, which allows modifying the parameter values per layer.
- feat: Shapes can now have multiple `NA`s, i.e. not only the batch dimension can be missing. However, most `nn()` operators still expect only one missing value and will throw an error if multiple dimensions are unknown.
- `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
- `jit_trace` now works in combination with batch normalization.
- Ensures compatibility with `R6` version 2.6.0.
- Removed some optimizers for which no fast ('ignite') variant exists.
- The default optimizer is now AdamW instead of Adam.
- The private `LearnerTorch$.dataloader()` method no longer operates on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
- The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where data is sorted.
- Optimizers now use their faster ('ignite') implementations, which leads to considerable speed improvements.
- The `jit_trace` parameter was added to `LearnerTorch`, which when set to `TRUE` can lead to significant speedups. This should only be enabled for 'static' models; see the torch tutorial for more information (example sketch after this list).
- Added parameter `num_interop_threads` to `LearnerTorch`.
- The `tensor_dataset` parameter was added, which allows stacking all batches at the start of training so that loading batches afterwards is faster.
- Use a faster default image loader.
- Added `PipeOp` for adaptive average pooling.
- The `n_layers` parameter was added to the MLP learner.
- Added multimodal melanoma and cifar{10, 100} example tasks.
- Added a callback to iteratively unfreeze parameters for finetuning.
- Added different learning rate schedulers as callbacks.
- Torch learners can now be used with `AutoTuner`.
- Early stopping now uses `epochs - patience` for the internally tuned values instead of the trained number of `epochs` (the previous behavior).
- The `dataset` of a learner must no longer return the tensors on the specified `device`, which allows for parallel dataloading on GPUs.
- `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
- Don't use the deprecated `data_formats` anymore.
- Added `CallbackSetTB`, which allows logging that can be viewed with TensorBoard (example sketch after this list).
- fix(preprocessing): Fixed the construction of some `PipeOp`s such as `po("trafo_resize")`, which failed in some cases.
- fix(ci): Tests were not run in the CI.
- fix(learner): `LearnerTabResnet` now works correctly.
- Fix that tests were not run in the CI.
- feat: Added the `nn()` helper function to simplify the creation of neural network layers.
- Initial CRAN release
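
The following sketches illustrate a few of the entries above. First, a minimal sketch of wrapping a plain torch module via `LearnerTorchModule`. The learner key `"classif.module"` and the `ingress_num()` helper are assumptions, as is the convention that the module generator's constructor arguments become hyperparameters; consult the `LearnerTorchModule` documentation for the exact interface.

```r
library(mlr3)
library(mlr3torch)
library(torch)

# A plain torch module with one hidden layer; its constructor arguments
# are assumed to become hyperparameters of the wrapping learner.
nn_one_layer = nn_module("nn_one_layer",
  initialize = function(size_in, size_hidden, size_out) {
    self$fc1 = nn_linear(size_in, size_hidden)
    self$fc2 = nn_linear(size_hidden, size_out)
  },
  forward = function(x) {
    self$fc2(nnf_relu(self$fc1(x)))
  }
)

# Wrap the module as a classification learner.
# NOTE: "classif.module" and ingress_num() are assumed names.
learner = lrn("classif.module",
  module_generator = nn_one_layer,
  ingress_tokens = list(x = ingress_num()),
  size_in = 4, size_hidden = 16, size_out = 3,
  epochs = 10, batch_size = 16
)
learner$train(tsk("iris"))
```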
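Next, a sketch of a small network graph that uses the `nn()` helper and `po("nn_fn")`. The graph follows the usual ingress / loss / optimizer / model pattern of mlr3torch; the `fn` parameter name of `po("nn_fn")` is an assumption.

```r
library(mlr3)
library(mlr3pipelines)
library(mlr3torch)

# Build a small classification network as a pipeline graph.
# nn("linear") etc. is shorthand for po("nn_linear"); the `fn` parameter
# of po("nn_fn") is assumed to take the function applied to the tensor.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 32) %>>%
  nn("relu") %>>%
  po("nn_fn", fn = function(x) x * 0.5) %>>%  # custom elementwise scaling
  nn("head") %>>%
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adamw")) %>>%
  po("torch_model_classif", epochs = 5, batch_size = 32)

learner = as_learner(graph)
learner$train(tsk("iris"))
```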
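A sketch of `lazy_shape()`, assuming `as_lazy_tensor()` accepts a plain torch tensor:

```r
library(mlr3torch)
library(torch)

# Wrap a tensor as a lazy tensor and inspect its shape; the batch
# dimension is reported as NA because it is not known ahead of time.
lt = as_lazy_tensor(torch_randn(10, 3))
lazy_shape(lt)
#> expected to be something like c(NA, 3)
```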
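A sketch combining the speed-related `LearnerTorch` settings mentioned above (`jit_trace`, `num_interop_threads`, `tensor_dataset`) on the MLP learner; the surrounding hyperparameter values are purely illustrative.

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  neurons = c(32, 32),
  epochs = 20,
  batch_size = 64,
  jit_trace = TRUE,          # trace the (static) network with torch's JIT
  num_interop_threads = 2,   # torch inter-op parallelism
  tensor_dataset = TRUE      # stack all batches once at the start of training
)
learner$train(tsk("sonar"))
```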
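Finally, a sketch of attaching the TensorBoard callback. The callback key `"tb"`, its `path` parameter (exposed as `cb.tb.path` on the learner), and the use of `set_validate()` to produce validation scores to log are assumptions to verify against the `CallbackSetTB` documentation.

```r
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10, batch_size = 32,
  callbacks = t_clbk("tb"),                      # assumed key for CallbackSetTB
  cb.tb.path = file.path(tempdir(), "tb_logs"),  # assumed parameter name
  measures_valid = msr("classif.acc")
)
set_validate(learner, 0.2)   # hold out 20% so there are validation scores to log
learner$train(tsk("sonar"))
# Inspect with: tensorboard --logdir <path>
```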