Allure Reports
On this page, we will go into detail on all the data that Allure generates and presents in interactive reports after executing tests with RESTest. Allure, a powerful reporting tool, provides visually appealing and easy-to-understand displays of test results. We will thoroughly analyze each aspect of the reports, including key metrics, interactive charts, and a complete history of executions, allowing us to gain a deep understanding of the state and quality of our software. With these insights at our disposal, we can quickly identify issues and make informed decisions to enhance the quality and performance of our project.
Next, we will explore in detail each of the sections offered by the report generated by Allure.
In the Overview section, we will not only obtain a high-level summary of the test execution but also additional details that provide a more comprehensive understanding of the current testing status. The following information is presented:
- **Test Environment Details:** the name of the API being tested, the selected test generator, and the paths to the configuration files and OpenAPI Specification (OAS) used for testing. Understanding the test environment is crucial for replicating and troubleshooting issues.
- **Number of Tests per Operation:** the distribution of tests across the different API operations, which helps identify which endpoints have received more testing focus and which might need additional attention.
- **Test Categories and Suites:** a brief glimpse of the test categorization and suites, hinting at the aspects that are explored in greater detail in other sections.
- **Execution Time:** the total time taken to execute all the tests, useful for evaluating the efficiency and duration of the test run.
- **Percentage of Successful Tests:** the percentage of tests that completed without encountering errors or failures, a key indicator of software stability.
- **Percentage of Failed Tests:** the percentage of tests that encountered errors or failures during execution, which helps identify problematic areas that need attention.
- **Other Relevant Metrics:** additional statistics, such as the number of skipped tests, the average execution time per test, or any custom metrics deemed relevant for the project.
This comprehensive report will provide a clear and concise snapshot of the overall test status, enabling us to quickly identify potential issues or areas for improvement. The information presented here will equip us to make informed decisions and take specific actions to enhance the quality and reliability of our software.
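In Allure, environment details like those listed above are typically supplied through an `environment.properties` file placed in the results directory before the report is generated. A minimal sketch follows; the property names and values are illustrative examples, not RESTest's actual output:

```properties
# environment.properties — dropped into the allure-results directory
# so that Allure displays it in the Environment widget of the Overview tab.
# All names and paths below are hypothetical examples.
API.Name=ExampleAPI
Test.Generator=ConstraintBasedTestCaseGenerator
Test.Conf.Path=src/test/resources/ExampleAPI/testConf.yaml
OAS.Path=src/test/resources/ExampleAPI/openapi.yaml
```

Allure picks this file up automatically when generating the report, so no additional configuration is needed beyond writing it next to the test results.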
In the Categories section, the tests are organized into specific categories that help us better understand the test results. Each category is identified by a name and defined by a set of criteria matching test error messages or statuses.
- **SEVERITY 3: Status 5XX with valid request**
  - Message Regex: `Status code 5XX with valid request.`
  - Matched Statuses: `failed`
- **SEVERITY 2: Status 5XX with invalid request**
  - Message Regex: `Status code 5XX with invalid request.*`
  - Matched Statuses: `failed`
- **SEVERITY 2: Status 5XX**
  - Message Regex: `Status code 5XX.`
  - Matched Statuses: `failed`
- **WARNING: Status 400 with (possibly) valid request**
  - Message Regex: `This test case's input was _possibly_ correct.*`
  - Matched Statuses: `failed`
- **SEVERITY 1: Status 2XX with invalid request (inter-parameter dependency violated)**
  - Message Regex: `This faulty test case was expecting a 4XX status code.inter_parameter_dependency.`
  - Matched Statuses: `failed`
- **SEVERITY 1: Status 2XX with invalid request (invalid parameter)**
  - Message Regex: `This faulty test case was expecting a 4XX status code.(individual_parameter_constraint|invalid_request_body).`
  - Matched Statuses: `failed`
- **SEVERITY 1: Disconformity with OAS**
  - Message Regex: `OAS disconformity.*`
  - Matched Statuses: `failed`
- **Ignored tests**
  - Matched Statuses: `skipped`
- **Broken tests**
  - Matched Statuses: `broken`
In this section of the Allure report, the tests will be automatically grouped into corresponding categories based on the error messages generated during the test execution and their statuses (failed, skipped, or broken). This will provide a clear and structured perspective of how different types of tests are distributed and their outcomes, enabling teams to quickly identify problematic areas and opportunities for improvement in the software under test.
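Allure builds this grouping from a `categories.json` file placed in the results directory. As a sketch of how a few of the categories above could be expressed in Allure's categories format (the field names `name`, `messageRegex`, and `matchedStatuses` are Allure's; the exact regexes RESTest ships may differ from these examples):

```json
[
  {
    "name": "SEVERITY 3: Status 5XX with valid request",
    "messageRegex": "Status code 5XX with valid request.*",
    "matchedStatuses": ["failed"]
  },
  {
    "name": "SEVERITY 1: Disconformity with OAS",
    "messageRegex": "OAS disconformity.*",
    "matchedStatuses": ["failed"]
  },
  {
    "name": "Ignored tests",
    "matchedStatuses": ["skipped"]
  },
  {
    "name": "Broken tests",
    "matchedStatuses": ["broken"]
  }
]
```

Each entry is evaluated in order against a test's status and error message; the first matching category claims the test, which is why status-only catch-all entries such as "Ignored tests" are listed last.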
In the "Suites" section, the tests are grouped into suites. A suite can represent a collection of related tests that are executed together or that focus on a specific functional area of the application. Suites provide an organized and structured way to browse and navigate the tests.
Each suite groups tests that share common characteristics, such as testing a specific feature, module, or user scenario. This allows for better management and easier identification of test sets related to particular functionalities, ensuring a more efficient testing process.
In the "Suites" section of the Allure report, users will be able to access and view detailed information about each suite, including the number of tests included, their execution status, and any associated metrics. This organized presentation enables teams to quickly assess the testing coverage of different areas in the application and identify potential gaps or areas that require additional attention.
Overall, the "Suites" section enhances the structure and comprehensibility of the test execution results, facilitating effective collaboration among team members and providing a clear overview of the overall testing process.