Allure Reports
On this page, we will go into detail on all the data that Allure generates and presents in interactive reports after executing tests with RESTest. Allure, a powerful reporting tool, provides visually appealing and easy-to-understand displays of our test results. We will analyze each aspect of the reports, including key metrics, interactive charts, and a complete history of executions, gaining a deep understanding of the state and quality of our software. With these insights at our disposal, we can quickly identify issues and make informed decisions to enhance the quality and performance of our project.
Once the RESTest execution process finishes, the Allure report is generated automatically in an interactive and highly informative HTML format. The report is stored at target/allure-reports/index.html.
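If the report ever needs to be rebuilt or previewed manually, the standard Allure command line can do so from the raw results. A minimal sketch, assuming the Allure CLI is installed and that the raw results live in target/allure-results (this results path is an assumption; adjust it to your RESTest configuration):

```bash
# Rebuild the HTML report from the raw Allure results (paths assumed)
allure generate target/allure-results --clean -o target/allure-reports

# Alternatively, serve the report on a temporary local web server
allure serve target/allure-results
```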
Next, we will explore in detail each of the sections offered by the report generated by Allure.
Overview
This section provides not only a high-level overview of the test execution but also additional details that give a more comprehensive understanding of the current testing status. The following information is presented:
- Test Environment Details: This includes the name of the API being tested, the selected test generator, and the paths to the configuration files and OpenAPI Specification (OAS) used for testing. Understanding the test environment is crucial for replicating and troubleshooting issues (see the sketch after this list).
- Number of Tests per Operation: The distribution of tests across different API operations, helping us identify which endpoints have received more testing focus and which might need additional attention.
- Test Categories and Suites: A brief glimpse of test categorization and suites, acting as a teaser for the aspects explored in greater detail in other sections.
- Execution Time: The total time taken to execute all the tests, allowing us to evaluate the efficiency and duration of the test execution.
- Percentage of Successful Tests: The percentage of tests that completed without encountering errors or failures, a key indicator of software stability.
- Percentage of Failed Tests: The percentage of tests that encountered errors or failures during execution, helping us identify problematic areas that need attention.
- Other Relevant Metrics: Other pertinent statistics, such as the number of skipped tests, the average execution time per test, or any custom metrics deemed relevant for the project.
This comprehensive report will provide a clear and concise snapshot of the overall test status, enabling us to quickly identify potential issues or areas for improvement. The information presented here will equip us to make informed decisions and take specific actions to enhance the quality and reliability of our software.
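For instance, the test environment details shown in the overview typically come from Allure's standard environment.properties mechanism: a plain properties file dropped into the raw results directory. A minimal sketch, where the file location, key names, and values are all illustrative assumptions:

```properties
# target/allure-results/environment.properties (assumed location)
API.Name=Bikewise
Test.Generator=ConstraintBasedTestCaseGenerator
OAS.Path=src/test/resources/Bikewise/swagger.yaml
Test.Configuration=src/test/resources/Bikewise/testConf.yaml
```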
Categories
In this section, the tests are organized into specific categories that help us better understand the test results. Each category is identified by a name and defined by a set of criteria corresponding to test error messages or statuses.
- SEVERITY 3: Status 5XX with valid request. Message Regex: `Status code 5XX with valid request.*` Matched Statuses: failed
- SEVERITY 2: Status 5XX with invalid request. Message Regex: `Status code 5XX with invalid request.*` Matched Statuses: failed
- SEVERITY 2: Status 5XX. Message Regex: `Status code 5XX.*` Matched Statuses: failed
- WARNING: Status 400 with (possibly) valid request. Message Regex: `This test case's input was _possibly_ correct.*` Matched Statuses: failed
- SEVERITY 1: Status 2XX with invalid request (inter-parameter dependency violated). Message Regex: `This faulty test case was expecting a 4XX status code.*inter_parameter_dependency.*` Matched Statuses: failed
- SEVERITY 1: Status 2XX with invalid request (invalid parameter). Message Regex: `This faulty test case was expecting a 4XX status code.*(individual_parameter_constraint|invalid_request_body).*` Matched Statuses: failed
- SEVERITY 1: Disconformity with OAS. Message Regex: `OAS disconformity.*` Matched Statuses: failed
- Ignored tests. Matched Statuses: skipped
- Broken tests. Matched Statuses: broken
In this section of the Allure report, the tests will be automatically grouped into corresponding categories based on the error messages generated during the test execution and their statuses (failed, skipped, or broken). This will provide a clear and structured perspective of how different types of tests are distributed and their outcomes, enabling teams to quickly identify problematic areas and opportunities for improvement in the software under test.
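These groupings rely on Allure's standard categories mechanism, configured through a categories.json file in the results directory. A minimal sketch showing how two of the categories listed above could be declared (the exact file contents used by RESTest are an assumption reconstructed from the list):

```json
[
  {
    "name": "SEVERITY 3: Status 5XX with valid request",
    "messageRegex": "Status code 5XX with valid request.*",
    "matchedStatuses": ["failed"]
  },
  {
    "name": "Ignored tests",
    "matchedStatuses": ["skipped"]
  }
]
```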
In the Suites section, the tests are grouped into suites. A suite can represent a collection of related tests that are executed together or that focus on a specific functional area of the application. Suites provide an organized and structured way to navigate the tests.
Each suite groups tests that share common characteristics, such as testing a specific feature, module, or user scenario. This allows for better management and easier identification of test sets related to particular functionalities, ensuring a more efficient testing process.
In the Suites section of the Allure report, users will be able to access and view detailed information about each suite, including the number of tests included, their execution status, and any associated metrics. This organized presentation enables teams to quickly assess the testing coverage of different areas in the application and identify potential gaps or areas that require additional attention.
Overall, the Suites section enhances the structure and comprehensibility of the test execution results, facilitating effective collaboration among team members and providing a clear overview of the overall testing process.
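With Allure's JUnit integration, each test class typically appears as its own suite. The following is a minimal sketch of what a suite of API tests could look like using JUnit 4 and REST Assured; the class, test, endpoint, and parameter names are illustrative assumptions, not RESTest's actual generated code:

```java
import static io.restassured.RestAssured.given;

import org.junit.Test;

// Allure reports each test class as a suite in the Suites tab,
// so related API tests grouped in one class show up together.
public class GetIncidentsTest {

    @Test
    public void getIncidentsWithValidQuery() {
        given()
            .baseUri("https://bikewise.org/api/v2") // assumed API under test
            .queryParam("per_page", 10)
        .when()
            .get("/incidents")
        .then()
            .statusCode(200); // a valid request should yield 2XX
    }
}
```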
In the Graphs section, visual charts are presented to summarize the test results. These include graphs depicting the states of the executed tests, severity distribution, and test duration, as well as trends for test duration, categories, and retries.
- Test States Graph: Provides an overview of the distribution of test states, including tests that have passed, failed, or been skipped. It offers a quick visual representation of the overall test execution outcomes.
- Severity Distribution Graph: Illustrates the distribution of test failures based on their severity levels, categorizing tests into levels such as critical, high, medium, and low, so teams can gauge the impact of failures on the application's functionality (a sketch of how severity is declared follows after this section's description).
- Test Duration Graph: Displays the duration of each test execution. It helps in identifying tests that might be taking a longer time to run, enabling teams to optimize the testing process.
- Duration Trends Graph: Shows the trends in test duration over time, providing insight into any changes or fluctuations in test execution times and aiding in identifying potential performance issues or improvements in the testing environment.
- Categories Trends Graph: Presents trends in test categories over time, helping teams understand the distribution and stability of test categories throughout different test runs.
- Retries Trends Graph: Displays the trends in test retries over time, helping to track the frequency of test retries and understand their impact on overall test stability and reliability.
These visual representations in the Graphs section allow for a quick and intuitive understanding of the test results, helping teams make data-driven decisions, identify patterns, and improve the testing process based on performance trends and severity distributions. The graphical overview provides valuable insights that contribute to enhancing the quality and effectiveness of the software testing efforts.
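The severity distribution graph is driven by Allure's severity metadata, which Java tests can declare with the @Severity annotation. A minimal sketch (the class and test names are illustrative; only the annotation matters here):

```java
import io.qameta.allure.Severity;
import io.qameta.allure.SeverityLevel;
import org.junit.Test;

public class SeverityAnnotationExample {

    // The declared level (blocker, critical, normal, minor, trivial)
    // determines which bucket of the severity graph this test falls into.
    @Severity(SeverityLevel.CRITICAL)
    @Test
    public void serverErrorOnValidRequest() {
        // test body omitted for brevity
    }
}
```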
In the Timeline section, a visual representation of the sequence of events and activities during test execution is displayed. This timeline helps to understand how the tests unfolded over time and aids in identifying any patterns or performance issues.
The timeline showcases each test as a milestone along the timeline, with the horizontal axis representing the chronological progression of the tests from the beginning to the end of the testing process. Each milestone or point on the timeline is labeled with the test name and its execution status, making it easy to identify successful, failed, skipped, and other test categories.
This temporal visualization is particularly valuable for identifying patterns or performance issues in the tests. For example, if there are clusters of tests that take significantly longer than usual, this may indicate areas that require optimization or potential bottlenecks in the testing process.
Moreover, the "Timeline" section features two filters to enhance usability:
- Execution Time Filter: This filter allows users to sort and filter the tests based on their execution time. Users can choose to view tests in ascending or descending order of execution time, helping them identify both fast and slow-performing tests.
- Time Interval Filter: This filter enables users to set a specific time interval and view only the tests that were executed within that interval. This feature is useful for pinpointing tests conducted during specific periods, which can be helpful for analyzing test performance during critical phases or identifying patterns during certain time frames.
The Timeline section provides a temporal perspective on test execution, making it easier to comprehend the progression of activities and quickly identify any potential patterns or issues that may impact the quality or efficiency of the tests. The inclusion of these filters enhances the user experience, facilitating more focused analysis and aiding in continuous improvement efforts during the testing process.
The Behaviors section provides detailed information about the behavior of the tests and how they interact with the application.
This section is particularly relevant for understanding how the tests communicate with the application under test and what actions they perform during execution. Here, specific details about the requests and responses made during the tests are presented, as well as any interaction with the user interface.
The information found in this section includes:
- Request Details: Detailed information about the requests sent from the tests to the application is shown. This may include data such as the type of request (e.g., GET, POST, PUT, DELETE), request headers, parameters, and the request body, if applicable.
- Response Details: Details of the responses received from the application in response to the test requests are provided. This may include the HTTP status code, response headers, and response content, which helps verify whether the application responds correctly to the tests (one way to record this data as attachments is sketched below).
Having access to this detailed information in the "Behaviors" section allows for a better understanding of how the tests interact with the application and how the software behaves under different test scenarios. This information is valuable for diagnosing issues, verifying compliance with requirements, and ensuring that the application functions correctly in diverse situations.
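Details like these are usually recorded as Allure attachments so they appear alongside each test in the report. A minimal sketch using Allure's Java API (the helper method and content types are illustrative assumptions; RESTest's own reporting pipeline may record this data differently):

```java
import io.qameta.allure.Allure;

public class ExchangeRecorder {

    // Attach the raw request and response bodies to the current test,
    // so they show up in its details view in the Allure report.
    public static void record(String requestBody, String responseBody) {
        Allure.addAttachment("Request body", "application/json", requestBody);
        Allure.addAttachment("Response body", "application/json", responseBody);
    }
}
```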
In the Packages section, tests are organized into packages or modules. These packages can represent specific components of the application or functional areas. This section helps to understand how tests are grouped and organized in relation to the application's structure.
The main purpose of this section is to provide a hierarchical view of the test organization. Tests are grouped into logical units or packages based on their functionality or relevance. For example, tests related to a particular module or feature may be grouped together within a package, making it easier to navigate and manage related tests.
This hierarchical organization helps teams quickly identify and access tests related to specific components of the application. It also facilitates test maintenance and ensures a more structured and organized testing process.
Furthermore, the "Packages" section allows users to access specific information about each package. This may include details about the number of tests within a package, the overall status of the tests within the package, and any additional metrics or data associated with that group of tests.
By organizing tests into packages, teams can better understand the distribution of tests across different parts of the application. This information is valuable for ensuring comprehensive test coverage and identifying any potential gaps in testing for specific components or functional areas.
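As a purely hypothetical illustration of this hierarchy, generated test packages might be laid out like this (all names below are assumptions for illustration only):

```text
com.example.generatedtests          <- hypothetical root package
├── bikewise                        <- tests for one API
│   └── GetIncidentsTest.java
└── petstore                        <- tests for another API
    ├── GetPetsTest.java
    └── PostPetTest.java
```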