
Commit 9210141

Updated README
1 parent 115442f commit 9210141

File tree

1 file changed: +93 -32 lines changed


README.md

Lines changed: 93 additions & 32 deletions
@@ -16,53 +16,114 @@ This is a very simple, lightweight API that can be set up to persist test runs i
Although written with plugins for C#, the API itself is agnostic and can accept check and result submissions from any language / technology.

When integrated with your test project, the API will be queried before each test runs to determine whether or not it _should_ be run, based on the criteria you have defined.

In CI/CD pipelines and/or large-scale automated test suites, this tool can be useful in minimising the continual re-running of long tests that are deemed fairly reliable.

For example, you may have several long-running tests that block out the rest of your CI/CD test pipeline, or lengthen the overall testing step(s) by an order of magnitude.

Using the `Not-Again` API, you could conditionally run the test(s) such that (a code sketch of this logic follows the list):
* If a test has run in the past _X_ days, and passed, then do not re-run it
* If it has been modified since the last run, then re-run it, regardless
* Otherwise, as a failsafe, just re-run it
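
To make the rule concrete, here is a minimal C# sketch of that decision logic. It is illustrative only; the record and member names are assumptions made for this example and are not taken from the Not-Again codebase.
```C#
using System;

// Illustrative sketch only: the record and member names below are hypothetical,
// chosen purely to show the decision rule described above. This is not the
// actual Not-Again implementation.
public sealed record LastRunRecord(bool Passed, DateTime RunDateUtc);

public static class RunDecision
{
    public static bool ShouldRunTest(
        LastRunRecord? lastRun,
        int rerunTestsOlderThanDays,
        DateTime testLastModifiedUtc)
    {
        if (lastRun is null) return true;                          // never run before
        if (!lastRun.Passed) return true;                          // last run did not pass
        if (testLastModifiedUtc > lastRun.RunDateUtc) return true; // modified since the last run
        if (lastRun.RunDateUtc < DateTime.UtcNow.AddDays(-rerunTestsOlderThanDays))
            return true;                                           // last passing run is 'stale'
        return false;                                              // recent, unmodified pass: skip it
    }
}
```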
### Examples
#### Examples of affirmative responses
```mermaid
sequenceDiagram
My Test->>+Not-Again: Run this test? Threshold is 10 days
Not-Again->>+Database: Find last test run
Note right of Database: No test found
Note right of Database: Test run found but last run did not pass
Note right of Database: Test run found but older than 10 days
rect rgb(64, 95, 45)
Database-->>-Not-Again: YES
Not-Again->>-My Test: YES
end
Note left of My Test: Run the test
My Test->>+Not-Again: Test completed with this result
Not-Again->>-Database: Store this result
```
#### Examples of negative responses
```mermaid
sequenceDiagram
My Test->>+Not-Again: Run this test? Threshold is 10 days
Not-Again->>+Database: Find last test run
Note right of Database: Test found and passed, newer than 10 days
rect rgb(95, 45, 45)
Database-->>-Not-Again: NO
Not-Again->>-My Test: NO
end
Note left of My Test: Skip the test
```
# Installation / usage
## Running the API
* This repo contains the entire source code for the API, as well as the domain model and database migrations to set up the required database
* Alternatively, you can find the API docker image at [this link](https://hub.docker.com/repository/docker/gman82/not-again-api)
* When running either of the above, you will need to pass a single environment variable `ConnectionStrings__NOT-AGAIN` (either via [Docker environment variables](https://docs.docker.com/compose/environment-variables/set-environment-variables/), or `dotnet` runtime variables)
* This value is the [database connection string](https://www.connectionstrings.com/sql-server/) that the API will use to persist the test data
### Docker command example
```powershell
docker run -d -p 80:80 -e ConnectionStrings__NOT-AGAIN='<MY_DATABASE_CONNECTION_STRING>' gman82/not-again-api:latest
```
<sub>NOTE: you may need to wrap the connection string in **single quotes** to escape the equals signs.</sub>
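
The double underscore in `ConnectionStrings__NOT-AGAIN` follows the standard .NET configuration convention, where `__` stands in for the `:` separator, so the value surfaces inside the API as the `NOT-AGAIN` connection string. Below is a minimal sketch of how such a value is typically read in an ASP.NET Core minimal-hosting app; this shows standard framework behaviour for illustration, not necessarily the exact code in this repository.
```C#
// Illustrative sketch: how a ConnectionStrings__NOT-AGAIN environment variable is
// typically read in an ASP.NET Core app. The double underscore maps to the ':'
// configuration separator, so the value surfaces as the "NOT-AGAIN" connection string.
// Standard .NET configuration behaviour, not code taken from this repository.
var builder = WebApplication.CreateBuilder(args);

var connectionString = builder.Configuration.GetConnectionString("NOT-AGAIN");

if (string.IsNullOrWhiteSpace(connectionString))
    throw new InvalidOperationException("No NOT-AGAIN connection string was supplied.");
```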
# Configuring your tests to use Not-Again
## .NET via NuGet package
### [Not.Again.NUnit](https://www.nuget.org/packages/Not.Again.NUnit/)
* Install the above NuGet package into your test project
* As shown in the [NUnit sample test project](https://github.com/gman-au/not-again/blob/master/src/6.0/Sample.NUnit.Test.Project/BasicTests.cs), add or update your `[SetUp]` and `[TearDown]` methods (which run before and after every _test_) to integrate with the Not-Again API:
```C#
[SetUp]
public async Task Setup() => await NotAgain.SetupAsync();

[TearDown]
public async Task Teardown() => await NotAgain.TearDownAsync();

... (the rest of your tests)
```
### Running the tests
When running the tests (i.e. `dotnet test`), the following environment variables will need to be supplied:
* `NOT_AGAIN_URL` - the base URL of your `Not-Again` API host, e.g. `https://localhost`
* `RERUN_TESTS_OLDER_THAN_DAYS` _(optional)_ - the threshold, in days, at which a passing test is deemed 'stale' and should be re-run. Tests with run dates falling _inside_ this threshold will not be re-run.

> [!IMPORTANT]
> If `RERUN_TESTS_OLDER_THAN_DAYS` is omitted, **all** of your tests will be re-run, regardless.

So, for example, the following command would pass the above variables through to the testing environment:
```powershell
dotnet test My.Test.Project\My.Test.Project.csproj -e NOT_AGAIN_URL=https://localhost -e RERUN_TESTS_OLDER_THAN_DAYS=30
```
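On the test side, the intent of these two variables can be summarised in a few lines of C#. This is a sketch of the semantics only, assuming straightforward environment variable handling; it is not the actual `Not.Again.NUnit` source.
```C#
using System;

// Sketch of the intended semantics only; not the actual Not.Again.NUnit code.
var apiBaseUrl = Environment.GetEnvironmentVariable("NOT_AGAIN_URL")
    ?? throw new InvalidOperationException("NOT_AGAIN_URL must be supplied.");

// If RERUN_TESTS_OLDER_THAN_DAYS is omitted, there is no threshold to compare against,
// so every test is treated as due for a re-run.
int? rerunTestsOlderThanDays =
    int.TryParse(Environment.GetEnvironmentVariable("RERUN_TESTS_OLDER_THAN_DAYS"), out var days)
        ? days
        : null;
```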
The test logger should provide feedback for each test.

<p align="center">
<img style="border-radius:10px;" width="600" src="https://github.com/user-attachments/assets/f7571dc1-cfa5-4247-a2e0-b3d92e2b5760" />
</p>

## API integration
While the NuGet package above is limited to .NET, it is simply a module that simplifies the HTTP requests to the `Not-Again` API; the API itself is not limited to .NET at all, and can accept HTTP requests from any testing platform.

The API has two simple endpoints (see the sketch below):
* `/Diagnostic/RunCheck` - HTTP POST expecting a [`RunCheckRequest`](https://github.com/gman-au/not-again/blob/master/src/6.0/Not.Again.Contracts/RunCheckRequest.cs) payload
  * This request returns a [`DiagnosticResponse`](https://github.com/gman-au/not-again/blob/master/src/6.0/Not.Again.Contracts/DiagnosticResponse.cs) payload if successful
* `/Diagnostic/ReportResult` - HTTP POST expecting a [`SubmitResultRequest`](https://github.com/gman-au/not-again/blob/master/src/6.0/Not.Again.Contracts/SubmitResultRequest.cs) payload (does not return a payload)
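### Raw HTTP example
For instance, a test harness in any language could reproduce what the NuGet package does with two plain HTTP POSTs. The sketch below uses C# `HttpClient` for familiarity, and the JSON property names are placeholders; the authoritative request and response shapes are defined by the contract classes linked above.
```C#
using System;
using System.Net.Http;
using System.Net.Http.Json;

// Sketch only: the anonymous-object property names below are placeholders.
// The authoritative shapes are the RunCheckRequest, DiagnosticResponse and
// SubmitResultRequest contract classes linked above.
var client = new HttpClient { BaseAddress = new Uri("https://localhost") };

// 1. Ask Not-Again whether the test should be run
var checkResponse = await client.PostAsJsonAsync("/Diagnostic/RunCheck", new
{
    testName = "MyLongRunningTest",          // placeholder field
    rerunTestsOlderThanDays = 10             // placeholder field
});
checkResponse.EnsureSuccessStatusCode();
// The body is a DiagnosticResponse payload indicating whether to run or skip the test.

// 2. After the test has finished, report the outcome so it can be persisted
var reportResponse = await client.PostAsJsonAsync("/Diagnostic/ReportResult", new
{
    testName = "MyLongRunningTest",          // placeholder field
    passed = true,                           // placeholder field
    runDate = DateTime.UtcNow                // placeholder field
});
reportResponse.EnsureSuccessStatusCode();
```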
# Further development
## Analytics dashboard
The domain model, in its current form, is effectively gathering test metrics over time, i.e.:
* The test characteristics
* How long the test took to run
* The eventual result of the test (pass, fail, inconclusive)
* The run date of the test

With these metrics, there is no reason why future features could not leverage this data into a fully-fledged test analytics / QA dashboard, in the vein of similar paid products:
* [Cypress Cloud](https://www.cypress.io/cloud)
* [Currents.dev](https://currents.dev)
* [Microsoft Playwright Testing](https://azure.microsoft.com/en-au/products/playwright-testing)
In fact, the primary driver for this platform was the absence of a .NET equivalent for the above (I really thought Microsoft Playwright Testing would support .NET!)
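
As a rough illustration of the kind of roll-up such a dashboard could perform over the stored run data, here is a hypothetical C# sketch; the `TestRun` type and its properties are assumptions made for this example, not the actual domain model.
```C#
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: 'TestRun' is an assumed shape for illustration,
// not the actual Not-Again domain model.
public sealed record TestRun(string TestName, TimeSpan Duration, string Result, DateTime RunDateUtc);

public static class DashboardStats
{
    // Rolls the stored runs up into the kind of per-test figures a QA dashboard might display.
    public static IEnumerable<object> Summarise(IEnumerable<TestRun> runs) =>
        runs
            .GroupBy(r => r.TestName)
            .Select(g => new
            {
                Test = g.Key,
                TotalRuns = g.Count(),
                PassRate = (double)g.Count(r => r.Result == "Passed") / g.Count(),
                AverageDuration = TimeSpan.FromTicks((long)g.Average(r => r.Duration.Ticks)),
                LastRunUtc = g.Max(r => r.RunDateUtc)
            });
}
```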
