Thank you for sharing this project; it helped me understand ESPCN better. I ran into a question and hope you can help me with it when you have time.
My question is as follows:
Which part of the code corresponds to the section of the paper quoted below?
3.2. Implementation details
For the ESPCN, we set l = 3, (f1, n1) = (5, 64), (f2, n2) = (3, 32) and f3 = 3 in our evaluations. The choice of the parameter is inspired by SRCNN's 3 layer 9-5-5 model and the equations in Sec. 2.2. In the training phase, 17r × 17r pixel sub-images are extracted from the training ground truth images IHR, where r is the upscaling factor. To synthesize the low-resolution samples ILR, we blur IHR using a Gaussian filter and sub-sample it by the upscaling factor. The sub-images are extracted from original images with a stride of (17 − Σ mod(f, 2)) × r from IHR and a stride of 17 − Σ mod(f, 2) from ILR. This ensures that all pixels in the original image appear once and only once as the ground truth of the training data. We choose tanh instead of relu as the activation function for the final model motivated by our experimental results.
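For context, the sub-image extraction described in this paragraph can be sketched as below. This is a minimal illustration, not the repository's actual code; the function name, image sizes, and the choices r = 3 and f = 3 are assumptions for the example (with f odd, the stride works out to 16 on the LR side and 16r on the HR side, so the LR and HR patch grids stay aligned one-to-one):

```python
def extract_patch_coords(height, width, patch, stride):
    """Top-left (y, x) coordinates of overlapping sub-images
    extracted with the given patch size and stride."""
    return [(y, x)
            for y in range(0, height - patch + 1, stride)
            for x in range(0, width - patch + 1, stride)]

# Assumed example values (not from the repository):
r = 3                      # upscaling factor
f = 3                      # last-layer filter size f3 from the paper
stride_lr = 17 - (f % 2)   # LR stride: 17 - mod(f, 2) = 16
stride_hr = stride_lr * r  # HR stride: (17 - mod(f, 2)) * r

# A 100x100 LR image and its 300x300 HR counterpart (r = 3)
coords_lr = extract_patch_coords(100, 100, 17, stride_lr)
coords_hr = extract_patch_coords(300, 300, 17 * r, stride_hr)

# Each LR patch has exactly one corresponding HR ground-truth patch.
assert len(coords_lr) == len(coords_hr)
```

The matching patch counts reflect the paper's claim that every pixel appears once and only once as ground truth when the stride is chosen this way.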
Thank you for reading; I look forward to your reply. Have a great day!