Using Analysis Reports
qTest Insights provides several pre-built, cross-project reports that can be viewed from three perspectives: quality, coverage, and velocity.
1. From the global filter, you can multi-select values for projects, test cycles, modules, and releases to filter your analysis report data. Any change to these criteria affects the entire report view.
2. You can set a global interval at which the data automatically refreshes (every 30, 60, 90, or 120 seconds). Clicking the Refresh Now button reloads your reports with the latest data immediately.
3. Selecting the Save Report button allows you to save each analysis report to the Manage Reports section of qTest Insights, where you can manage or schedule the report later on.
Check the saved reports article for how to manage and schedule reports.
Below are some helpful tips for navigating the Quality Analysis report.
Open the Analysis menu and select Quality to open the Quality Analysis page.
- There are three quality charts:
- Latest Test Run Results: the results from the latest runs of each test case.
- Test Results by Day: a breakdown of the test run results by day.
- Defect Status: shows the distribution of defects by status.
- Click on each tab to show data broken down by project and test cycle.
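To make the Latest Test Run Results chart concrete, here is a minimal sketch of the idea behind it: for each test case, only the most recent run counts toward the chart. The record layout and test case IDs below are invented for illustration; they are not the qTest Insights schema.

```python
from datetime import date

# Hypothetical test-run records: (test case, run date, result).
runs = [
    ("TC-1", date(2016, 8, 24), "Failed"),
    ("TC-1", date(2016, 8, 25), "Passed"),
    ("TC-2", date(2016, 8, 23), "Blocked"),
]

# "Latest Test Run Results" keeps only the most recent run of each test case.
latest = {}
for case, run_date, result in runs:
    if case not in latest or run_date > latest[case][0]:
        latest[case] = (run_date, result)

print({case: result for case, (_, result) in latest.items()})
# {'TC-1': 'Passed', 'TC-2': 'Blocked'}
```

Note that TC-1's earlier Failed run is discarded: the chart reflects current quality, not run history (that is what Test Results by Day shows).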
Hover to See Data
Example: When you hover over the green pie slice, a small pop-up displays telling you that there are 30 defects.
Click to Drill-down for Details
Check this article for things you are able to do on a drill-down page.
The Coverage Analysis report helps you determine test case and execution coverage as it relates to requirements and application areas. Below are some helpful tips on navigating the Coverage Analysis report.
1. Open the Analysis menu and click Coverage to open the Coverage Analysis page.
There are two coverage charts:
- Requirements Test Run Coverage: The heat map is set up so that the size of each box is based on the number of test runs and the color of the box is based on the number of failed runs. Dark green indicates the lowest number of failures and bright red the highest.
- Requirements Run Results: The results from the latest run of each test case covering the requirements.
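The heat-map rule above can be sketched as a simple mapping from run counts to box size and from failure counts to a color scale. All the coverage numbers below are invented for illustration; the requirement US_Login with 6 runs and 3 failures matches the hover example later in this section.

```python
# requirement -> (test runs, failed runs); numbers are made up.
coverage = {"US_Login": (6, 3), "US_Search": (10, 0), "US_Pay": (4, 4)}

max_failed = max(failed for _, failed in coverage.values())

# Box size follows the run count; color runs from 0.0 (dark green,
# no failures) to 1.0 (bright red, the most failures on the map).
heat = {
    req: (runs, (failed / max_failed) if max_failed else 0.0)
    for req, (runs, failed) in coverage.items()
}
print(heat)
# {'US_Login': (6, 0.75), 'US_Search': (10, 0.0), 'US_Pay': (4, 1.0)}
```

So a large dark-green box (like US_Search here) means heavily tested and passing, while a small bright-red box (US_Pay) means lightly tested and failing.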
Test Type: Filters the analysis data table below between manual and automated test run data.
Analysis Data Table: This table shows the test cases for each requirement along with the testing results and number of defects. The following fields deserve some explanation:
- Explorer Sessions: the count of Explorer sessions associated with a specific test case and requirement. (An Explorer session must be linked to a Test Run, which is in turn linked to a Requirement, to be counted.)
- Explorer Time: total time spent in Explorer sessions associated with a specific test case and requirement
- Open Defects (qTest): the count of qTest defects whose status is New, Reopened, or Assigned
- Severe Defects (qTest): the count of qTest defects whose severity is Fatal or Major
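The two defect counts are straightforward filters over the defect list. The records below are invented; only the Status and Severity values come from the definitions above.

```python
# Hypothetical qTest defect records; IDs and values are illustrative.
defects = [
    {"id": "DF-1", "status": "New",      "severity": "Major"},
    {"id": "DF-2", "status": "Closed",   "severity": "Fatal"},
    {"id": "DF-3", "status": "Assigned", "severity": "Minor"},
    {"id": "DF-4", "status": "Reopened", "severity": "Fatal"},
]

OPEN_STATUSES = {"New", "Reopened", "Assigned"}
SEVERE_LEVELS = {"Fatal", "Major"}

open_defects = sum(d["status"] in OPEN_STATUSES for d in defects)
severe_defects = sum(d["severity"] in SEVERE_LEVELS for d in defects)
print(open_defects, severe_defects)  # 3 3
```

Note the two filters are independent: DF-2 is severe but not open, and DF-3 is open but not severe.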
2. You can hover over an area to see data
Example: When you hover over a cell of Requirements Test Run Coverage, a small pop-up displays telling you that the requirement US_Login is covered by 6 test runs, 3 of which failed.
3. Click the Heat Map to Drill Down to Details
Check this article for things you can do on the drill-down page.
The Velocity Analysis reports show how fast the team is going and how much work might be left to do. Below are some helpful tips on navigating the Velocity Analysis report.
1. Velocity Analysis
Select Velocity from the Analysis menu to open the Velocity Analysis page.
There are three velocity metrics as follows:
- Tests Planned vs. Executed: cumulative daily counts of tests planned and executed, together with execution results.
- New Requirements and Test Cases: the number of requirements and test cases (manual and automated) added by day.
- Defects Opened and Closed: the number of defects opened and closed by day.
Analysis Data Table: shows activities and results broken down by tester.
- Estimated Time Remaining: calculated based on the measured time spent by tester per test step and the number of test steps remaining.
- Forecasted Defects Remaining: calculated based on the number of defects found by tester per test run and the number of test runs remaining.
- Explorer Sessions: the count of Explorer sessions conducted by the tester
- Explorer Time: total time of Explorer sessions conducted by the tester
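One plausible reading of the two forecast columns above is plain rate-times-remaining arithmetic: average time per executed step times steps left, and average defects per executed run times runs left. qTest does not publish the exact formula, and every number below is invented, so treat this as a sketch of the described calculation only.

```python
# Invented per-tester figures for illustration.
time_spent = 300.0       # minutes the tester has logged so far
steps_executed = 60      # test steps already executed
steps_remaining = 40     # test steps left to run

# Estimated Time Remaining = average time per step * steps remaining
est_time_remaining = (time_spent / steps_executed) * steps_remaining

defects_found = 12       # defects this tester has raised so far
runs_executed = 48       # test runs already executed
runs_remaining = 16      # test runs left

# Forecasted Defects Remaining = defects per run * runs remaining
forecast_defects = (defects_found / runs_executed) * runs_remaining

print(est_time_remaining, forecast_defects)  # 200.0 4.0
```

With these numbers, the tester averages 5 minutes per step (hence 200 minutes remaining) and 0.25 defects per run (hence 4 forecasted defects).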
2. Hover Over Charts to See Data
Example: When you hover over a point on the Tests Planned vs. Executed chart, a small pop-up displays telling you that there were 27 passed test runs on 8/25/2016.
3. Click Data Points of Charts to Drill-Down
Check this article for things you can do on the drill-down page.