
chore(deps): bump actions/checkout from 3 to 4 #149

Triggered via pull request September 4, 2023 21:50
Status Failure
Total duration 3m 43s
Artifacts

tests.yaml

on: pull_request

Annotations

5 errors
tests: tests/unit/test_all_of.py#L14
test_resolve_should_resolve_if_all_conjectures_resolves_truly
hypothesis.errors.Flaky: Hypothesis test_resolve_should_resolve_if_all_conjectures_resolves_truly(return_values=[True, True, True, False, False, True, False]) produces unreliable results: Falsified on the first call but did not on a subsequent one
Falsifying example: test_resolve_should_resolve_if_all_conjectures_resolves_truly(
    return_values=[True, True, True, False, False, True, False],
)
Unreliable test timings! On an initial run, this test took 456.69ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 1.45 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
You can reproduce this example by temporarily adding @reproduce_failure('6.82.7', b'AAEBAQEBAQEAAQABAQEAAA==') as a decorator on your test case
tests: tests/unit/test_all_of_conjecture.py#L14
test_resolve_should_resolve_if_all_conjectures_resolves_truly
hypothesis.errors.Flaky: Hypothesis test_resolve_should_resolve_if_all_conjectures_resolves_truly(return_values=[False, True, False, False, False, False, False]) produces unreliable results: Falsified on the first call but did not on a subsequent one
Falsifying example: test_resolve_should_resolve_if_all_conjectures_resolves_truly(
    return_values=[False, True, False, False, False, False, False],
)
Unreliable test timings! On an initial run, this test took 418.01ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 1.19 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
You can reproduce this example by temporarily adding @reproduce_failure('6.82.7', b'AAEAAQEBAAEAAQABAAYBAAA=') as a decorator on your test case
tests: tests/unit/test_has_attribute.py#L20
test_should_match_when_attribute_exists
hypothesis.errors.Flaky: Hypothesis test_should_match_when_attribute_exists(value='ToYUFT', other=-24367) produces unreliable results: Falsified on the first call but did not on a subsequent one
Falsifying example: test_should_match_when_attribute_exists(
    value='ToYUFT',
    other=-24367,
)
Unreliable test timings! On an initial run, this test took 480.62ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 0.53 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
You can reproduce this example by temporarily adding @reproduce_failure('6.82.7', b'AAETASgFARgGARQEAQUCATcTBwAHBAcDAL5f') as a decorator on your test case
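All three `hypothesis.errors.Flaky` failures above share the same cause: the first execution of each example exceeded Hypothesis's default 200 ms deadline (likely cold-start import cost on the CI runner), while the replay finished in under 2 ms. As Hypothesis itself suggests, relaxing or disabling the deadline for the affected tests avoids this. A minimal sketch, assuming a hypothetical test (the real test bodies are not shown in this run):

```python
from hypothesis import given, settings, strategies as st


@settings(deadline=None)  # disable the per-example deadline so slow first runs are not flagged as Flaky
@given(return_values=st.lists(st.booleans(), min_size=1))
def test_resolve_should_resolve_if_all_conjectures_resolves_truly(return_values):
    # Hypothetical body standing in for the real assertion at
    # tests/unit/test_all_of.py line 14: the conjunction resolves truthy
    # only when every conjecture resolves truthy.
    assert all(return_values) == (False not in return_values)
```

Alternatively, `@settings(deadline=timedelta(milliseconds=1000))` keeps a safety net while tolerating CI jitter; `deadline=None` is the option named in the error messages above.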
tests: tests/unit/test_instance_of.py#L1
[pylint] tests/unit/test_instance_of.py E: 14,10: unsupported operand type(s) for | (unsupported-binary-operation)
tests
Process completed with exit code 2.