Hi, I have been using the Python file https://github.com/rajatsen91/deepglo/blob/master/DeepGLO/DeepGLO.py

I don't understand the code in the method `step_factF_loss` on lines 255 and 256:

```python
r = loss.detach() / l2.detach()
loss = loss + r * reg * l2
```

1. Why divide `loss` by `l2`?
2. What does the parameter `reg` mean? It is always 0 in the code.

Many thanks!
It ensures that the two loss terms are roughly of the same order of magnitude, but it is currently inactive because `reg` is set to 0. `reg` stands for the regularization penalty; the code does not use regularization here, which is why it is set to 0.
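One way to see the "same order" point is a minimal sketch with hypothetical magnitudes (assuming PyTorch; the numbers below are made up for illustration):

```python
import torch

# Hypothetical values: the main loss is much larger than the l2 term.
loss = torch.tensor(250.0, requires_grad=True)
l2 = torch.tensor(0.004, requires_grad=True)
reg = 0.1  # regularization weight; the repo sets it to 0, disabling the term

# r rescales the tiny l2 term up to the magnitude of the main loss,
# so reg weights two comparable quantities. detach() keeps the ratio
# itself out of gradient tracking.
r = loss.detach() / l2.detach()
total = loss + r * reg * l2
print(total.item())
```

Without the `r` factor, `reg * l2` would be negligible next to `loss` and the weight `reg` would be hard to tune across datasets with different scales.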
Sorry, I'm not sure what "the same order" means here. As I understand it, if we set `reg = 0.1`, the code computes:

```python
r = loss.detach() / l2.detach()
loss = loss + r * reg * l2  # = loss + (loss / l2) * reg * l2 = loss + loss * reg
```

so the resulting loss seems to have no relation to `r`.
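For what it's worth, the algebra above holds for the *value* of the loss but not for its gradient: because `r` is built from detached tensors, autograd treats it as a constant, so the added term contributes `r * reg * grad(l2)` rather than `reg * grad(loss)`. A small scalar check (hypothetical example, assuming PyTorch):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
loss = x ** 2    # stands in for the main loss; value 4.0
l2 = 0.5 * x     # stands in for the l2 term; value 1.0
reg = 0.1

r = loss.detach() / l2.detach()   # r = 4.0, treated as a constant by autograd
total = loss + r * reg * l2

# In value, total == loss * (1 + reg) == 4.4, matching the algebra above.
assert abs(total.item() - 4.4) < 1e-6

total.backward()
# But the gradient is not (1 + reg) * d(loss)/dx = 4.4. The extra term
# contributes r * reg * d(l2)/dx = 4.0 * 0.1 * 0.5 = 0.2 on top of
# d(loss)/dx = 2 * x = 4.0, giving 4.2.
assert abs(x.grad.item() - 4.2) < 1e-6
```

So `r` does affect training whenever `reg > 0`: it rescales how strongly the `l2` gradient is weighted, even though it cancels out in the printed loss value.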