During the training phase you have to feed the decoder its input at each time step along with the corresponding target. You also have to modify the target sequences by adding padding and end-of-sentence (EOS) tokens.
You might wonder why these padding and EOS tokens are needed.
During training, the decoder input is the target sequence with an EOS token prepended (acting as a start-of-sequence marker). The targets are then the same sequence shifted one time step ahead, so that at each step the decoder learns to predict the next token.
Let's take an example:
a,b,c,d (input) -> p,q,r (output)
The decoder input sequence should be EOS,p,q,r.
The targets should be p,q,r,0 (where 0 is padding).
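The shifting above can be sketched in plain Python. The token ids here (EOS=1, PAD=0, and p,q,r mapped to 2,3,4) are made up purely for illustration, not taken from any particular vocabulary:

```python
# Hypothetical token ids for illustration.
EOS, PAD = 1, 0

def make_decoder_pairs(target_ids, max_len):
    """Build teacher-forcing pairs: prepend EOS to form the decoder
    input, and pad the targets so they are one step ahead."""
    decoder_input = [EOS] + target_ids    # EOS, p, q, r
    decoder_target = target_ids + [PAD]   # p, q, r, 0
    # Pad both sequences out to a fixed length for batching.
    decoder_input += [PAD] * (max_len - len(decoder_input))
    decoder_target += [PAD] * (max_len - len(decoder_target))
    return decoder_input, decoder_target

inp, tgt = make_decoder_pairs([2, 3, 4], max_len=4)
print(inp)  # [1, 2, 3, 4]  -> EOS, p, q, r
print(tgt)  # [2, 3, 4, 0]  -> p, q, r, padding
```

At every position `t`, `decoder_target[t]` is exactly the token the decoder should emit after seeing `decoder_input[t]`, which is the one-step-ahead shift described above.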
Can you please explain a little bit? Thanks!