Replies: 17 comments 25 replies
-
The test runner is something that's morphed from a quick thing that printed results on the command line into the current kludge.

1. Have a global flag where I can disable skips; I want to be able to test new CAD versions to see what's been fixed. I had started this with TEST_LEVEL and makeSkip.
2. Have the "read only" test media open for the session, like dbc.dbs["06457"].
3. I was planning to add a folder that I can use to test object creation (i.e. a new database or sheet set) but that is ignored by Git. Right now the objects are just spammed into the current document.
4. I had turned off testing for older CAD versions. I don't know if that was a good idea, as some people have perpetual licenses and will probably want to know if something works. Then there's the whole can of worms of when to drop support for CAD versions. I might want to add this back in somehow with a flag.

Also, I was thinking about how to use TEST_LEVEL or another flag to disable long-running tests, or to separate regression tests from unit tests; see the sketch below.
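A minimal sketch of what that flag-based skipping could look like, assuming a TEST_LEVEL environment variable and a makeSkip helper along the lines of the names above. The bit values and categories are made up for illustration, not the actual scheme:

```python
# test_flags.py -- illustrative sketch only; bit names and values are invented
import os
import unittest

# Hypothetical "force run" bits: setting a bit disables the matching skip.
RUN_KNOWN_FAILS = 0x1    # run tests normally skipped as known failures
RUN_LONG_RUNNING = 0x2   # run slow regression tests
RUN_OLD_HOSTS = 0x4      # run tests for older CAD versions

# TEST_LEVEL read once from the environment; int(..., 0) accepts "0x..." hex too.
TEST_LEVEL = int(os.environ.get("TEST_LEVEL", "0"), 0)


def makeSkip(flag, reason):
    """Skip the test unless the matching 'force run' bit is set in TEST_LEVEL."""
    if TEST_LEVEL & flag:
        return lambda func: func   # bit set: disable the skip, run the test
    return unittest.skip(reason)   # default: keep the skip in place


# usage:
# @makeSkip(RUN_KNOWN_FAILS, "known to fail on this CAD version")
# def test_something(self): ...
```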
-
I don't know if it's the best way, but it's probably one of the canonical ones:

```python
# test_a.py
import pytest


@pytest.mark.known_fail_zrx25
def test_a():
    raise RuntimeError("This test should fail")
```

```python
# conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-known-fail-zrx25",
        action="store_true",
        default=False,
        help="Run tests marked as known_fail_zrx25",
    )


def pytest_collection_modifyitems(config, items):
    if not config.getoption("--run-known-fail-zrx25"):
        skip_known_fail = pytest.mark.skip(reason="Skipping tests marked as known_fail_zrx25")
        for item in items:
            if "known_fail_zrx25" in item.keywords:
                item.add_marker(skip_known_fail)


def pytest_configure(config):
    # register an additional marker
    config.addinivalue_line("markers", "known_fail_zrx25: This test is known to fail on zrx25")
```

```python
# test_runner.py
import pytest

FLAGS = 1  # from sys.argv, e.g. FLAGS = 1 or FLAGS = 2

if __name__ == "__main__":
    if FLAGS & 1:
        pytest.main(["-vv"])  # don't run tests marked as known_fail_zrx25 etc.
    elif FLAGS & 2:
        pytest.main(["-vv", "--run-known-fail-zrx25"])  # run tests marked as known_fail_zrx25 etc.
```
-
Wondering if there's a better way to organize tests. Thinking out loud:

- Have a root folder pyrxtest that holds the media, results, and configuration.
- The test runner would traverse the subfolders, automatically add any test files it finds, and run them (a sketch of that discovery loop follows this list).
- Set up a whole new test folder 'pyrxtest', then the old tests could be migrated over to pyrxtest.
- Adding a test would be as simple as adding a new file; the runner would pick it up.
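A rough sketch of that discovery loop, assuming a pyrxtest root folder next to the runner. The folder name and the test_*.py pattern are just the conventions floated above, nothing decided:

```python
# run_pyrxtest.py -- illustrative only; assumes a pyrxtest/ root next to this file
import pathlib
import pytest

ROOT = pathlib.Path(__file__).parent / "pyrxtest"


def collect_test_files(root: pathlib.Path) -> list[str]:
    """Walk every subfolder and pick up anything that looks like a test module."""
    return [str(p) for p in sorted(root.rglob("test_*.py"))]


if __name__ == "__main__":
    files = collect_test_files(ROOT)
    # Hand the discovered files to pytest; media/results/config folders are
    # ignored automatically because they contain no test_*.py modules.
    raise SystemExit(pytest.main(["-vv", *files]))
```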
-
I guess this is kind of what I was thinking of: https://docs.pytest.org/en/stable/explanation/goodpractices.html#python-test-discovery
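For reference, the layout that page describes is roughly the following; the package and test file names below are only placeholders, not a proposal for this repo:

```
pyproject.toml
src/
    pyrx/
        __init__.py
        ...
tests/
    conftest.py
    test_commands.py
    test_geometry.py
```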
-
There will probably be tools that are not ARX-based, so there is no need to test them on all hosts (but they can still be tested in the host environment). Do you think it's a good idea to keep these tests next to tests that are host-dependent, but marked e.g. with …?
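One possible shape for that marking; the marker name host_independent and the option name are purely hypothetical, pytest itself only needs the marker registered:

```python
# conftest.py -- sketch only; marker and option names are invented for illustration
import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "host_independent: test does not depend on the ARX host"
    )


def pytest_addoption(parser):
    parser.addoption(
        "--skip-host-independent",
        action="store_true",
        default=False,
        help="Skip tests that do not need a CAD host (already covered elsewhere)",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--skip-host-independent"):
        skip = pytest.mark.skip(reason="host-independent, tested outside the host")
        for item in items:
            if "host_independent" in item.keywords:
                item.add_marker(skip)
```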
-
I don't understand how this works (Lines 37 to 47 in 0801469). It looks like you're skipping a test on all hosts if it fails on at least one. Do you change this code manually depending on the host?
-
Looks like I messed that up lol, I was attempting to make a bitwise flag to make the skip decorator easier.
-
The old tests are in disarray, so it will take some time to get them organized and moved. I can do the tedious stuff.
-
I tried your commit, did sub-directories too, totally awesome!!! Let me know if there are any flags I need to catch/process. There are flags in config that may be useful: dev_mode, tracemalloc, parser_debug; not sure. I can also put CAD in a "test" state, possibly enable stuff inside C++... TBD
-
I think you should create a …
-
I get the command line arguments in before Python is initialized, so I would like the option to set up certain flags that may be helpful, for example `./run_tests --tf 0xFFFFFFFF --hosts acad`, where I can unpack the hex to set up certain conditions (sketch below). Of course, release compiles should test with no flags, so we're testing what the client gets. If not, no worries, I can use my .INI; the command line is cooler though.
I don't think it's necessary, it's already isolated. I'll stay out of it until you say it's ready.
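A minimal sketch of unpacking a hex flag like that, assuming an argument of the form `--tf 0xFFFFFFFF`. The option names and the individual bit meanings are placeholders (the bit names just reuse the config flags mentioned above for illustration):

```python
# parse_test_flags.py -- illustrative; flag names and bit assignments are invented
import argparse

# hypothetical bit meanings
DEV_MODE = 0x1
TRACEMALLOC = 0x2
PARSER_DEBUG = 0x4


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # int(value, 0) accepts "0x..." hex as well as plain decimal
    parser.add_argument("--tf", type=lambda s: int(s, 0), default=0,
                        help="test flags as a hex bitmask, e.g. 0xFFFFFFFF")
    parser.add_argument("--hosts", nargs="*", default=["acad"],
                        help="hosts to run against")
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    if args.tf & DEV_MODE:
        print("dev_mode enabled")
    if args.tf & TRACEMALLOC:
        print("tracemalloc enabled")
    if args.tf & PARSER_DEBUG:
        print("parser_debug enabled")
    print("hosts:", args.hosts)
```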
-
Moving …
-
Excellent, I get the hex flags!
-
I'll look at the tests in more detail; maybe I'll find some things to improve (simplify). And I understand that you want to take care of the transfer yourself, then?
-
I assume this is a bad test: PyRx/unitTests/UnitTestMisc.py, Line 28 in c45c8fc. You probably meant `self.assertTrue(img.IsOk(), True)`.
As written, it amounts to `assert bool(img.IsOk) is True`, and almost every object in Python gives a truthy boolean value. If you had used `assert img.IsOk is True` it would not pass. I know, I can check it, but I wanted to write it anyway :)
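A small self-contained illustration of why passing the bound method instead of calling it makes the assertion always pass; `Img` here is a stand-in class, not the real wrapper object:

```python
import unittest


class Img:
    """Stand-in for the real image wrapper; IsOk deliberately reports failure."""
    def IsOk(self) -> bool:
        return False


class TestIsOkPitfall(unittest.TestCase):
    def test_bound_method_is_always_truthy(self):
        img = Img()
        # Bug: img.IsOk is a bound method object, which is always truthy,
        # so this passes even though IsOk() would return False.
        self.assertTrue(img.IsOk)

    def test_calling_the_method(self):
        img = Img()
        # Correct form: actually call the method; this one fails as expected.
        with self.assertRaises(AssertionError):
            self.assertTrue(img.IsOk())


if __name__ == "__main__":
    unittest.main()
```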
-
One thing I noticed is that … doesn't work for me, but this does: … When I work in C++, I have everything uninstalled and I set my stub path to the repository; I guess this is the reason.
-
Seems there are tests that will need to run under a specific context. The old test probably also fails in session.
-
I noticed a few things to improve in the unit tests.

- `pytest` seems better than `unittest`; e.g., you can write `assert True` instead of `self.assertTrue(True)`.
- Collecting tests can be done more simply than it is currently. An example test runner might look like the sketch after this list.
- Although you are using `unittest`, you can run the tests using `pytest`; excerpts from the `pytest.log` file: …
- All such functions are unnecessary: …
- Assuming automatic test collection, the imports will need to be corrected: …

I can help with this, you just need to present what concept you had in mind.
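A rough stand-in for the kind of runner meant here, assuming plain pytest auto-collection from a tests folder; the paths and options are placeholders, not the actual proposal:

```python
# runner.py -- sketch only; folder names and options are placeholders
import pytest

if __name__ == "__main__":
    # pytest discovers test_*.py files on its own, so no manual registration
    # of test modules or suite-building functions is needed.
    exit_code = pytest.main([
        "-vv",
        "--maxfail=0",             # keep going after failures
        "--junitxml=results.xml",  # machine-readable results, if wanted
        "tests",                   # root folder to collect from
    ])
    raise SystemExit(exit_code)
```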