From 025d89e425862c0629415b08b1dbfbee7539deaf Mon Sep 17 00:00:00 2001
From: Daniel Falbel
Date: Thu, 30 Jan 2025 14:01:10 -0300
Subject: [PATCH] Polish NEWS

---
 NEWS.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/NEWS.md b/NEWS.md
index a4c331e3f3..5f8f7c806a 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,18 +1,22 @@
 # torch 0.14.0
 
-# torch (development version)
+## Breaking changes
+
+- Updated to LibTorch v2.5.1 (#1204) -- potentially breaking change!
+
+## New features
+
+- Faster optimizers (`optim_ignite_<name>()`) are available: Adam, AdamW, Adagrad, RMSprop, SGD.
+  These can be used as drop-in replacements for `optim_<name>` but are considerably
+  faster, as they wrap the LibTorch implementation of the optimizer.
+  The biggest speed differences can be observed for complex optimizers such as `AdamW`.
 
 ## Bug fixes
 
 - `torch_iinfo()` now support all integer dtypes (#1190 @cregouby)
 - Fixed float key_padding_mask in `nnf_multi_head_attention_forward()` (#1205)
-- Updated to LibTorch v2.5.1 (#1204)
 - Fix french translation (#1176 @cregouby)
 - Trace jitted modules now respect 'train' and 'eval' mode (#1211)
-- Feature: Faster optimizers (`optim_ignite_<name>()`) are available: Adam, AdamW, Adagrad, RMSprop,SGD.
-  These can be used as drop-in replacements for `optim_<name>` but are considerably
-  faster as they wrap the LibTorch implementation of the optimizer.
-  The biggest speed differences can be observed for complex optimizers such as `AdamW`.
 * Fix: Avoid name clashes between multiple calls to `jit_trace` (#1246)
 
 # torch 0.13.0
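The "drop-in replacement" claim in the new-features entry above can be illustrated with a minimal sketch. This is not part of the patch: it assumes torch 0.14.0, a toy linear model, and that `optim_ignite_adamw()` accepts the same basic constructor arguments as `optim_adamw()`; check the package documentation for the exact signatures.

```r
library(torch)  # torch R package, assumed >= 0.14.0

# Toy model and data, purely for illustration.
model <- nn_linear(10, 1)
x <- torch_randn(64, 10)
y <- torch_randn(64, 1)

# Drop-in swap: where code previously called optim_adamw(model$parameters, lr = 0.01),
# the ignite variant is assumed to take the same basic arguments.
opt <- optim_ignite_adamw(model$parameters, lr = 0.01)

for (epoch in 1:5) {
  opt$zero_grad()                    # clear gradients from the previous step
  loss <- nnf_mse_loss(model(x), y)  # forward pass and loss
  loss$backward()                    # backpropagate
  opt$step()                         # parameter update, backed by LibTorch
}
```

Because the update step runs inside LibTorch rather than in R, the speedup is most visible for optimizers that do more per-parameter work, which is why the entry singles out `AdamW`.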