Commit

Merge pull request google-research#270 from ywkim/typo
Fix typo in optimization.py
jacobdevlin-google authored Dec 18, 2018
2 parents 85f453a + 933848a commit 7f51d42
Showing 1 changed file with 1 addition and 1 deletion.
optimization.py: 1 addition & 1 deletion
@@ -137,7 +137,7 @@ def apply_gradients(self, grads_and_vars, global_step=None, name=None):
  # the correct way of using L2 regularization/weight decay with Adam,
  # since that will interact with the m and v parameters in strange ways.
  #
- # Instead we want ot decay the weights in a manner that doesn't interact
+ # Instead we want to decay the weights in a manner that doesn't interact
  # with the m/v parameters. This is equivalent to adding the square
  # of the weights to the loss with plain (non-momentum) SGD.
  if self._do_use_weight_decay(param_name):
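The comment being corrected describes decoupled weight decay: the decay term is added to the update after the Adam moment estimates are computed, so it never flows through m and v. A minimal NumPy sketch of that idea, with illustrative names and hyperparameters (not the repository's TensorFlow implementation):

```python
# Decoupled weight decay (AdamW-style), illustrative sketch only.
import numpy as np

def adamw_step(w, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-6, weight_decay=0.01):
    """One update step. The decay is applied directly to the weights,
    so it does not interact with the m/v moment estimates."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: gradient only
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: gradient only
    update = m / (np.sqrt(v) + eps)           # Adam direction
    update += weight_decay * w                # decoupled decay term
    w = w - lr * update
    return w, m, v
```

Adding `weight_decay * w` to the loss gradient instead would feed the decay through m and v, which is the interaction the comment warns against.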

0 comments on commit 7f51d42
