Discrepancy in eval_end2end_ace2005e #18
Comments
It would be easier for us to identify your issue if you could save the model's generated text and share some samples in the thread.
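Something like this minimal sketch would work for dumping a few samples; `pred_texts` and `gold_texts` are placeholders for whatever decoded strings your evaluation loop produces, not names from the repo:

```python
import json

# pred_texts / gold_texts are placeholders for the decoded strings your
# evaluation loop produces (e.g. the outputs of model.generate(...)).
def dump_samples(pred_texts, gold_texts, path="generated_samples.jsonl", limit=50):
    with open(path, "w", encoding="utf-8") as f:
        for pred, gold in list(zip(pred_texts, gold_texts))[:limit]:
            record = {"prediction": pred, "gold": gold}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

A JSONL file like this is easy to attach to the thread and to diff against the expected outputs.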
The generative model I'm using is the pre-trained DEGREE_e2e_ace05e.mdl released with the code; I didn't train it myself. I can provide the log files for your examination.
To better address the problem, could you provide me with your email address? I'll send you the log files so we can resolve this more thoroughly.
Note that package versions sometimes matter as well, so please check that your packages match ours. Especially for …
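For example, a quick way to print the installed versions using only the standard library; the package list below is an assumption (these are the usual suspects for generation mismatches), so check the repo's requirements file for the exact pins:

```python
from importlib.metadata import version, PackageNotFoundError

# Assumed list of packages that most often affect generation/evaluation
# results; compare the output against the repo's pinned requirements.
for pkg in ["torch", "transformers", "tokenizers", "numpy"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```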
Hi,
I'm encountering a puzzling problem when running eval_end2end_ace2005e.py with the DEGREE_e2e_ace05e.mdl pretrained model. The evaluation results I obtain differ significantly from those reported in the associated paper, even though I haven't modified the model itself.
Has anyone else encountered a similar problem when running eval_end2end_ace2005e with the pretrained model?
Could you kindly provide insights into potential areas that might lead to such discrepancies? Are there any data processing intricacies that could be causing this?
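For reference, here is the kind of sanity check I can run on the data side. The path and the `event_mentions` field are assumptions based on OneIE-style preprocessing, not something I've confirmed against this repo, so adjust both to the actual processed format:

```python
import json

# Assumptions: the processed test split is JSON Lines, one instance per
# line, with an "event_mentions" list per instance (OneIE-style format).
path = "processed_data/ace05e/test.json"  # hypothetical path
n_instances, n_events = 0, 0
with open(path, encoding="utf-8") as f:
    for line in f:
        obj = json.loads(line)
        n_instances += 1
        n_events += len(obj.get("event_mentions", []))
print(f"instances: {n_instances}, event mentions: {n_events}")
```

If these counts don't match the dataset statistics in the paper, the discrepancy is likely on the preprocessing side rather than in the model.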
Any assistance in troubleshooting this matter would be greatly appreciated.