v1.2.0
What's new
Changed ⚠️
- Enforced stricter typing requirements around the use of `Optional[T]` types.
- Changed the behavior of `Lazy` types in `from_params` methods. Previously, if you defined a `Lazy` parameter like `foo: Lazy[Foo] = None` in a custom `from_params` classmethod, then `foo` would actually never be `None`. This behavior is now different: if no params were given for `foo`, it will be `None`. You can also now set a default value for `foo`, like `foo: Lazy[Foo] = Lazy(Foo)`. Or, if you want a default value but also want to allow for `None` values, you can write it like this: `foo: Optional[Lazy[Foo]] = Lazy(Foo)` (see the sketch below this list).
- Added support for PyTorch version 1.7.
Fixed ✅
- Made it possible to instantiate `TrainerCallback` from config files.
- Fixed the remaining broken internal links in the API docs.
- Fixed a bug where Hotflip would crash with a model that had multiple TokenIndexers and the input used rare vocabulary items.
- Fixed a bug where `BeamSearch` would fail if `max_steps` was equal to 1.
Commits
7f85c74 fix docker build (#4762)
cc9ac0f ensure dataclasses not installed in CI (#4754)
812ac57 Fix hotflip bug where vocab items were not re-encoded correctly (#4759)
aeb6d36 revert samplers and fix bug when max_steps=1 (#4760)
baca754 Make returning token type id default in transformers intra word tokenization. (#4758)
5d6670c Update torch requirement from <1.7.0,>=1.6.0 to >=1.6.0,<1.8.0 (#4753)
0ad228d a few small doc fixes (#4752)
71a98c2 stricter typing for Optional[T] types, improve handling of Lazy params (#4743)
27edfbf Add end+trainer callbacks to Trainer.from_partial_objects (#4751)
b792c83 Fix device mismatch bug for categorical accuracy metric in distributed training (#4744)