Bug report #3
Comments
Hello. If GPU memory is the limit, you can train at a smaller problem scale. As for the issue you raised, the code there is indeed not standard practice, but I tested it and it does not affect the results.
Sent from Mail for Windows
From: weixians
Sent: December 5, 2022, 14:01
To: leikun-starting/End-to-end-DRL-for-FJSP
Cc: Subscribed
Subject: [leikun-starting/End-to-end-DRL-for-FJSP] Bug report (Issue #3)
Hello author, I am very fortunate to have read your paper and have benefited greatly from it.
Regarding the paper's code implementation, I found a few problems:
1. Converting the adjacency matrix to sparse causes memory to explode; so far no one has been able to train with this code (a sketch of a memory-friendlier conversion follows this list);
2. There are some fairly obvious low-level mistakes in the code; please fix them.
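If the blow-up in point 1 comes from materializing the batched block-diagonal adjacency densely and only then converting it to sparse, one memory-friendlier alternative is to build the sparse tensor directly from the nonzero indices, so the (B·n) × (B·n) dense intermediate never exists. A minimal sketch, assuming the per-graph adjacencies arrive as a dense (B, n, n) tensor; the helper name `block_diag_adj_sparse` is hypothetical and not taken from the repository:

```python
import torch

def block_diag_adj_sparse(adjs: torch.Tensor) -> torch.Tensor:
    """Build the block-diagonal adjacency of a batch of graphs as a sparse
    COO tensor, without materializing the (B*n) x (B*n) dense block matrix.
    `adjs` has shape (B, n, n); hypothetical helper, not from the repo."""
    B, n, _ = adjs.shape
    idx = adjs.nonzero(as_tuple=False)   # (nnz, 3): batch, row, col
    rows = idx[:, 1] + idx[:, 0] * n     # offset each graph's rows ...
    cols = idx[:, 2] + idx[:, 0] * n     # ... and columns by its block
    vals = adjs[idx[:, 0], idx[:, 1], idx[:, 2]]
    return torch.sparse_coo_tensor(
        torch.stack([rows, cols]), vals, (B * n, B * n)
    )

# Example: 64 graphs with 100 nodes each. The dense block matrix alone would
# hold 6400 x 6400 float32 values (~160 MB); only the ~32k nonzeros are stored.
adjs = (torch.rand(64, 100, 100) < 0.05).float()
sp = block_diag_adj_sparse(adjs).coalesce()
print(sp.shape, sp.values().numel())
```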
C:\Users\William\.conda\envs\Pytorch\python.exe "C:/Users/William/OneDrive - stu.xjtu.edu.cn/桌面/leikun/End-to-end-DRL-for-FJSP-main/End-to-end-DRL-for-FJSP-main/FJSP_MultiPPO/PPOwithValue.py"
For me, the same.
You could try downgrading your PyTorch version (I used 1.4.0), or modify the code following the error message; either should work.
The error occurs because the code uses the same critic for the job policy and the machine policy. The same value estimate v therefore appears in both loss functions, and after the job policy finishes its backward pass (and its optimizer step), the machine policy's backward pass reports that the tensors it needs have been changed.
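To make this failure mode concrete, here is a minimal, self-contained PyTorch sketch; the modules, losses, and shapes are stand-ins, not the repository's actual classes. The stricter in-place check that surfaces this error was introduced around PyTorch 1.5, which is presumably why 1.4.0 runs the original ordering without complaint. The generic repair at the end (summing both losses into a single backward pass before any optimizer step) is one option; giving each policy its own critic, or detaching the shared value in one of the two losses, would also avoid the conflict.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for the shared critic and the two policy heads.
critic = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 1))
job_actor = nn.Linear(8, 4)
mch_actor = nn.Linear(8, 4)

opt = torch.optim.Adam(
    list(critic.parameters())
    + list(job_actor.parameters())
    + list(mch_actor.parameters())
)

x = torch.randn(16, 8)    # dummy batch of states
ret = torch.randn(16, 1)  # dummy returns

v = critic(x)  # ONE value tensor, reused by both losses below

job_loss = -job_actor(x).mean() + (v - ret).pow(2).mean()
mch_loss = -mch_actor(x).mean() + (v - ret).pow(2).mean()

# Buggy ordering (backward, step, backward): the optimizer step modifies the
# critic's weights in place, while the second backward still needs the
# weights saved at forward time, so recent PyTorch versions raise an
# "inplace operation" RuntimeError here.
try:
    job_loss.backward(retain_graph=True)
    opt.step()
    mch_loss.backward()
except RuntimeError as err:
    print("reproduced:", err)

# Generic repair: sum the losses and run a single backward pass, so the
# shared critic's graph is consumed exactly once before any weight update.
opt.zero_grad()
v = critic(x)
job_loss = -job_actor(x).mean() + (v - ret).pow(2).mean()
mch_loss = -mch_actor(x).mean() + (v - ret).pow(2).mean()
(job_loss + mch_loss).backward()
opt.step()
```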
I've revised 'PPOwithValue.py'; please check it for details.