dasharo-performance-parallelable: Add proof of concept for parallel tests #1130
Conversation
Force-pushed from 92f21fc to 5f8f666
It's a proof of concept for running test cases in parallel, with common execution steps. This approach, while it might look complicated and unconventional, can allow for dramatic improvements in test suite execution times. By using the supported test cases dict and the _CANARY_ and _PSEUDO_ test cases, the actual test scope can be determined; then all the tests can be performed in parallel in any way we find reasonable. The actual test cases can then access suite variables created by the _PSEUDO_ test cases to determine whether they PASS or not.

Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
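The gathering logic described above could be sketched in Python roughly as follows. This is a hypothetical illustration, not code from the PR: the names (`SUPPORTED_CASES`, `run_parallel_gathering`, the measurement functions) are made up, and the real implementation lives in Robot Framework keywords and a library file.

```python
# Hypothetical sketch of the parallel-gathering idea: determine the test
# scope from a supported-cases dict, run only the required gathering
# steps in parallel, and expose the merged results as "suite variables"
# that the individual test cases later inspect to decide PASS/FAIL.
from concurrent.futures import ThreadPoolExecutor

# Map each supported test case to the gathering step it needs
# (case IDs match the PR's examples; step names are illustrative).
SUPPORTED_CASES = {
    "CPT001.201": "measure_temperature",
    "CPF005.201": "measure_fan_speed",
}

def measure_temperature():
    # Placeholder for a real measurement on the shared device.
    return {"cpu_temp_ok": True}

def measure_fan_speed():
    return {"fan_speed_ok": True}

GATHER_STEPS = {
    "measure_temperature": measure_temperature,
    "measure_fan_speed": measure_fan_speed,
}

def run_parallel_gathering(selected_cases):
    """Run the gathering steps needed by the selected cases in parallel
    and merge their partial results into one dict."""
    steps = {SUPPORTED_CASES[c] for c in selected_cases if c in SUPPORTED_CASES}
    results = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(lambda s: GATHER_STEPS[s](), steps):
            results.update(partial)
    return results

suite_variables = run_parallel_gathering(["CPT001.201", "CPF005.201"])
# Each actual test case then only checks its own key in suite_variables.
```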
Force-pushed from 5f8f666 to 1988ada
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Added the …
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from 3c685d6 to 3eb3d7e
Added the … Added the option for … They are available as … Tested using a temp test case:

```robotframework
*** Settings ***
Default Tags    automated

*** Test Cases ***
TEST001.001 Test scanning args
    ${TEST_CASES}=    Evaluate    list(${TEST_CASES})
    ${TEST_TAGS}=    Evaluate    list(${TEST_TAGS})
    Log To Console    \ntags: ${TEST_TAGS}
    Log To Console    \ncases: ${TEST_CASES}
    FOR    ${case}    IN    @{TEST_CASES}
        Log To Console    \tcase: ${case}
    END

CPT001.201
    Pass Execution    1

CPF005.201
    Pass Execution    2

ABC000.000 automated test
    Pass Execution    3

ABC001.000 Not-automated case
    [Tags]    semiauto
    Pass Execution    4
```
```
==============================================================================
Test
==============================================================================
TEST001.001 Test scanning args ..
tags: ['automated']
.
cases: ['TEST001.001', 'CPT001.201', 'ABC000.000', 'ABC001.000']
. case: TEST001.001
case: CPT001.201
case: ABC000.000
case: ABC001.000
TEST001.001 Test scanning args                                        | PASS |
------------------------------------------------------------------------------
CPT001.201                                                            | PASS |
1
------------------------------------------------------------------------------
ABC000.000 automated test                                             | PASS |
3
------------------------------------------------------------------------------
Test                                                                  | PASS |
3 tests, 3 passed, 0 failed
==============================================================================
```

Logs, for good measure: …

The next step is to integrate this into the proof of concept, which should then become much simpler.
With 6125b76 there is no real overhead for running tests in parallel.
Other than that, there are a couple of keyword tools to check which tests should be run and to decide which steps to perform in the parallel gathering tasks. All the keywords and variables related to managing this were moved to a library file. I don't think there is a smart way of simplifying places like the …

I'd like to emphasize that this approach is only needed for tests which require synchronization. SSH test cases, when they don't reboot or in any other way affect the shared resource (the device), can simply be run in parallel using … I am not sure how to keep a single definition of a test case and somehow treat it differently with respect to parallelisation when it is performed via SSH. SSH test cases don't need the constructions suggested in this PR if they don't affect the device state, so they shouldn't be joined into large suites like the …

This approach would also be needed if there are several test cases in a suite where at least one depends on another, or if any single one reboots or otherwise affects the device.
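The distinction above (read-only SSH checks vs. steps that mutate shared device state) can be sketched in Python. This is an illustrative model only, with made-up function names; it just shows why one group can run freely in parallel while the other must be serialized:

```python
# Illustrative sketch (not code from the PR): read-only SSH checks can
# run concurrently, while any step that reboots or otherwise mutates
# shared device state must be serialized behind a lock.
import threading
from concurrent.futures import ThreadPoolExecutor

device_lock = threading.Lock()
log = []  # list.append is thread-safe in CPython

def read_only_ssh_check(name):
    # Safe to run in parallel: does not change device state.
    log.append(f"{name}: checked")
    return True

def reboot_and_check(name):
    # Mutates shared device state, so it must hold the lock,
    # forcing these steps to run one at a time.
    with device_lock:
        log.append(f"{name}: rebooted and checked")
        return True

with ThreadPoolExecutor() as pool:
    ro = [pool.submit(read_only_ssh_check, f"SSH{i}") for i in range(3)]
    rw = [pool.submit(reboot_and_check, f"RB{i}") for i in range(2)]
    results = [f.result() for f in ro + rw]

print(all(results))  # prints True
```

Under this model, the PR's synchronized construction is only needed for the second group, which matches the comment above.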
Force-pushed from 6125b76 to f70ee95
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from 2376d06 to 3a39e7a
Added the tests under load (reusing all the gathering code) and the stability tests (extending the existing gathering code to run it in parallel with temperature and frequency measurements) to show how much effort making parallel test cases actually takes.
Force-pushed from ea85cac to 294c669
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from 294c669 to ff07e33
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from f0fcf20 to 1bb4793
…e gather steps Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from e876f6e to 06145d2
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Force-pushed from 06145d2 to 373bfd3
Signed-off-by: Filip Gołaś <filip.golas@3mdeb.com>
Added checking the power source: NA/Battery/AC/USB-PD
It's a proof of concept for running test cases in parallel, with common
execution steps. This approach, while it might look complicated and
unconventional, can allow for dramatic improvements in test suite
execution times.

By using the supported test cases dict and the PARALLEL test cases,
the actual test scope can be determined; then all the tests can be
performed in parallel in any way we find reasonable.

Then the actual test cases can access suite variables created
by the PARALLEL test cases to determine whether they PASS or not.
TODO before merging: