Btw, I got an idea that might complement our deprecation feature. If the deprecation period is close to expiring, we could dynamically switch the warning to a stricter class (e.g. SevereDeprecationWarning) and configure pytest.ini to fail tests whenever this SevereDeprecationWarning is captured.
Let's assume some user has an in-house actor which uses SomeModel, and this actor is covered by tests.
In this case, before 2021-06-01 the tests will keep passing and only warnings will be reported, but after 2021-06-01 the tests will start failing.
Nothing more - the actor will still work, only the tests will fail. If there are no tests, then there are no warnings and no failures at all (at least until we really remove the model).
This mechanism could also let us trace suppressed deprecation warnings whose deprecation notice has already expired (our own tests would start failing), so we would know the date needs to be updated.

Originally posted by @zhukovgreen in https://github.com/oamg/leapp/diffs
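To make the idea concrete, here is a minimal sketch of how such a date-based switch could look. All names here (SevereDeprecationWarning, deprecation_warning, the dates, the module path in the pytest config) are placeholders for illustration, not the existing leapp deprecation helpers:

```python
import datetime
import warnings


class SevereDeprecationWarning(DeprecationWarning):
    """Emitted instead of a plain DeprecationWarning once the grace period expired."""


def deprecation_warning(message, since, period=datetime.timedelta(days=180)):
    """Warn about a deprecation, escalating the category after `since + period`."""
    expired = datetime.date.today() > since + period
    category = SevereDeprecationWarning if expired else DeprecationWarning
    warnings.warn(message, category, stacklevel=2)


# E.g. a model deprecated on 2020-12-01 keeps producing ordinary warnings
# until roughly 2021-06-01 and switches to SevereDeprecationWarning afterwards.
deprecation_warning(
    "SomeModel is deprecated, use AnotherModel instead",
    since=datetime.date(2020, 12, 1),
)
```

On the test side, pytest's filterwarnings ini option accepts a fully qualified warning class, so something like `filterwarnings = error::mypkg.deprecation.SevereDeprecationWarning` in pytest.ini would turn only the escalated warning into a test failure while ordinary DeprecationWarnings stay as plain warnings (the module path is again just a placeholder).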
Doesn't that mean that after the 6-month period users would be in a similar situation as if we had removed the functionality? Keep in mind that when we deprecate something, in a bad scenario a user could have just 1 month to fix their code. That is not so good, I guess. For users it should be ~6 months from the moment they receive the product. That's why we are not saying that we will remove the functionality right after the 6-month period - we want to keep the original functionality working until we actually remove it.
To be honest, I do not know. I get your idea, but I am not sure whether it's the right or wrong way to go. It's not common to do anything like that in projects, so I am not very happy about it.