
Update handling of result files #21

Open · wants to merge 5 commits into main
Conversation

@pclausen (Contributor) commented Nov 7, 2021

Update the handling of result files, which currently causes issues; see #20.

The problem is that compiler errors are always "normalized" to work better in the run-tests case, but the normalization (removal of line numbers, spaces, etc.) always happened. This PR moves the normalization from run.sh to run-tests.sh and should thereby solve #20.
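The kind of normalization described here can be sketched as a small sed pipeline (a hypothetical illustration, not the actual run-tests.sh code):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the normalization described above (NOT the actual
# run-tests.sh code): strip line/column numbers and squeeze whitespace so
# compiler errors compare stably across environments.
normalize() {
  sed -e 's/:[0-9][0-9]*/:/g' \
      -e 's/  */ /g'
}

printf '%s\n' 'test.f:4:10: Fatal Error: Cannot open module file bob.mod' | normalize
# -> test.f::: Fatal Error: Cannot open module file bob.mod
```

This is also why the expected message in the result file below reads `:::` where the compiler originally emitted `file:line:column:`.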

@pclausen pclausen requested a review from a team as a code owner November 7, 2021 11:23
@pclausen changed the title from "Update result files (WIP)" to "Update handling of result files" on Nov 7, 2021
@@ -2,5 +2,5 @@
 "version": 2,
 "status": "error",
 "tests": [],
-"message": "/tmp/example-empty-file/example_empty_file_test.f:::\\\\n\\\\n | use bob\\\\n | \\\\nFatal Error: Cannot open module file bob.mod for reading at (): No such file or directory\\\\ncompilation terminated.\\\\nmake[]: *** [CMakeFiles/example-empty-file.dir/build.make:: CMakeFiles/example-empty-file.dir/example_empty_file_test.f.o] Error \\\\nmake[]: *** [CMakeFiles/Makefile:: CMakeFiles/example-empty-file.dir/all] Error \\\\nmake: *** [Makefile:: all] Error \\\\n"
+"message": "/tmp/example-empty-file/example_empty_file_test.f::: | use bob | Fatal Error: Cannot open module file bob.mod for reading at (): No such file or directorycompilation terminated.make[]: *** [CMakeFiles/example-empty-file.dir/build.make:: CMakeFiles/example-empty-file.dir/example_empty_file_test.f.o] Error make[]: *** [CMakeFiles/Makefile:: CMakeFiles/example-empty-file.dir/all] Error make: *** [Makefile:: all] Error "
Member
I think you've now stripped the newlines entirely, which is not what we want :)

Contributor Author

Well, only for the test case. I moved the "stripping" of newlines etc. from run.sh to run-tests.sh. Thus, for the "real" case using run.sh we will still have newlines, line numbers, etc.

Maybe this whole stripping of the JSON file makes the test meaningless, but my problem is that the errors differ between Docker and local execution. If you have a suggestion for a better solution I would be very happy to implement it, if it's within my ability.

Member

Ah, I see. The main concern I have with that is that the expected result files no longer exactly indicate what is shown to the student. We'd thus never know for certain whether what's displayed to the student is the correct value.

If you'd like me to take a look, I'd be happy to of course.

Contributor Author

@ErikSchierboom I agree, and I would like it if you could have a look. My main problem is that I do not know Docker well enough to know if there is a way to make it behave more like the "normal" Linux system. If the two systems behaved the same, this stripping of results would not be necessary.

Member

Docker should actually behave like a normal Linux system. I'll take a look.

@ErikSchierboom
Member

@pclausen I'm looking into this, but I'm seeing some weirdness. Having checked out this PR and then run ./bin/run-tests-in-docker.sh, the results.json files that are produced are not valid JSON:

 {
   "tests": [
     {
       "name"     : "Test 1: stating something",
       "test_code": "Test 1: stating something",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 2: shouting",
       "test_code": "Test 2: shouting",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 3: shouting gibberish",
       "test_code": "Test 3: shouting gibberish",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 4: asking a question",
       "test_code": "Test 4: asking a question",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 5: asking a numeric question",
       "test_code": "Test 5: asking a numeric question",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 6: asking gibberish",
       "test_code": "Test 6: asking gibberish",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 7: talking forcefully",
       "test_code": "Test 7: talking forcefully",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 8: using acronyms in regular speech",
       "test_code": "Test 8: using acronyms in regular speech",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 9: forceful question",
       "test_code": "Test 9: forceful question",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 10: shouting numbers",
       "test_code": "Test 10: shouting numbers",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 11: no letters",
       "test_code": "Test 11: no letters",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 12: question with no letters",
       "test_code": "Test 12: question with no letters",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 13: shouting with special characters",
       "test_code": "Test 13: shouting with special characters",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 14: shouting with no exclamation mark",
       "test_code": "Test 14: shouting with no exclamation mark",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 15: statement containing question mark",
       "test_code": "Test 15: statement containing question mark",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 16: non-letters with question",
       "test_code": "Test 16: non-letters with question",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 17: prattling on",
       "test_code": "Test 17: prattling on",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 18: silence",
       "test_code": "Test 18: silence",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 19: prolonged silence",
       "test_code": "Test 19: prolonged silence",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 20: multiple line question",
       "test_code": "Test 20: multiple line question",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 21: starting with whitespace",
       "test_code": "Test 21: starting with whitespace",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
     ,
     {
       "name"     : "Test 22: ending with whitespace",
       "test_code": "Test 22: ending with whitespace",
       "status"   : "pass",
     }
     ,
     {
       "name"     : "Test 23: non-question ending with whitespace",
       "test_code": "Test 23: non-question ending with whitespace",
       "status"   : "fail",
       "message"  : "Expected < Whatever. > but got < Whatever. THIS IS A FAILING TEST >"
     }
   ],
   "version": 2,
   "status" : "fail"
   "message": "Test summary: 8 of 23 tests failed",
 }

For example,

{
   "name"     : "Test 18: silence",
   "test_code": "Test 18: silence",
   "status"   : "pass",
 }

has a trailing comma (which is illegal in JSON) and

"version": 2,
"status" : "fail"
"message": "Test summary: 8 of 23 tests failed",

is missing a comma after "fail". Maybe you could fix this?
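One way to catch this class of problem early would be to run every generated results.json through a strict JSON parser before comparing it to the expected output. A minimal sketch, assuming python3 is available in the image (the glob below is illustrative, not the actual layout the scripts use):

```shell
#!/usr/bin/env sh
# Sketch: fail fast when a generated results.json file is not valid JSON.
# Uses python3's stdlib parser; both a trailing comma and a missing comma
# make it exit non-zero.
for f in tmp/*/results.json; do
  [ -e "$f" ] || continue          # glob matched nothing
  if ! python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "invalid JSON: $f" >&2
    exit 1
  fi
done
```

That would make run-tests.sh report "invalid JSON" directly instead of failing later on a confusing diff.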

SaschaMann added a commit to exercism/fortran that referenced this pull request Mar 29, 2022
ErikSchierboom pushed a commit to exercism/fortran that referenced this pull request Mar 29, 2022